Nov 26 07:08:19 np0005536586 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 26 07:08:19 np0005536586 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 26 07:08:19 np0005536586 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 07:08:19 np0005536586 kernel: BIOS-provided physical RAM map:
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 26 07:08:19 np0005536586 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 26 07:08:19 np0005536586 kernel: NX (Execute Disable) protection: active
Nov 26 07:08:19 np0005536586 kernel: APIC: Static calls initialized
Nov 26 07:08:19 np0005536586 kernel: SMBIOS 2.8 present.
Nov 26 07:08:19 np0005536586 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 26 07:08:19 np0005536586 kernel: Hypervisor detected: KVM
Nov 26 07:08:19 np0005536586 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 26 07:08:19 np0005536586 kernel: kvm-clock: using sched offset of 3844938102 cycles
Nov 26 07:08:19 np0005536586 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 26 07:08:19 np0005536586 kernel: tsc: Detected 2445.406 MHz processor
Nov 26 07:08:19 np0005536586 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 26 07:08:19 np0005536586 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 26 07:08:19 np0005536586 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 26 07:08:19 np0005536586 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 26 07:08:19 np0005536586 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 26 07:08:19 np0005536586 kernel: Using GB pages for direct mapping
Nov 26 07:08:19 np0005536586 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Early table checksum verification disabled
Nov 26 07:08:19 np0005536586 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 26 07:08:19 np0005536586 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 26 07:08:19 np0005536586 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 26 07:08:19 np0005536586 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 26 07:08:19 np0005536586 kernel: No NUMA configuration found
Nov 26 07:08:19 np0005536586 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 26 07:08:19 np0005536586 kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Nov 26 07:08:19 np0005536586 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Nov 26 07:08:19 np0005536586 kernel: Zone ranges:
Nov 26 07:08:19 np0005536586 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 26 07:08:19 np0005536586 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 26 07:08:19 np0005536586 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 07:08:19 np0005536586 kernel:  Device   empty
Nov 26 07:08:19 np0005536586 kernel: Movable zone start for each node
Nov 26 07:08:19 np0005536586 kernel: Early memory node ranges
Nov 26 07:08:19 np0005536586 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 26 07:08:19 np0005536586 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 26 07:08:19 np0005536586 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 07:08:19 np0005536586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 26 07:08:19 np0005536586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 26 07:08:19 np0005536586 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 26 07:08:19 np0005536586 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 26 07:08:19 np0005536586 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 26 07:08:19 np0005536586 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 26 07:08:19 np0005536586 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 26 07:08:19 np0005536586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 26 07:08:19 np0005536586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 26 07:08:19 np0005536586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 26 07:08:19 np0005536586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 26 07:08:19 np0005536586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 26 07:08:19 np0005536586 kernel: TSC deadline timer available
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Max. logical packages:   4
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Max. logical dies:       4
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Max. dies per package:   1
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Max. threads per core:   1
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Num. cores per package:     1
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Num. threads per package:   1
Nov 26 07:08:19 np0005536586 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: setup PV sched yield
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 26 07:08:19 np0005536586 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 26 07:08:19 np0005536586 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 26 07:08:19 np0005536586 kernel: Booting paravirtualized kernel on KVM
Nov 26 07:08:19 np0005536586 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 26 07:08:19 np0005536586 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 26 07:08:19 np0005536586 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: PV spinlocks enabled
Nov 26 07:08:19 np0005536586 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 07:08:19 np0005536586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 26 07:08:19 np0005536586 kernel: random: crng init done
Nov 26 07:08:19 np0005536586 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: Fallback order for Node 0: 0 
Nov 26 07:08:19 np0005536586 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 26 07:08:19 np0005536586 kernel: Policy zone: Normal
Nov 26 07:08:19 np0005536586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 26 07:08:19 np0005536586 kernel: software IO TLB: area num 4.
Nov 26 07:08:19 np0005536586 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 26 07:08:19 np0005536586 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 26 07:08:19 np0005536586 kernel: ftrace: allocated 193 pages with 3 groups
Nov 26 07:08:19 np0005536586 kernel: Dynamic Preempt: voluntary
Nov 26 07:08:19 np0005536586 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 26 07:08:19 np0005536586 kernel: rcu: 	RCU event tracing is enabled.
Nov 26 07:08:19 np0005536586 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 26 07:08:19 np0005536586 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 26 07:08:19 np0005536586 kernel: 	Rude variant of Tasks RCU enabled.
Nov 26 07:08:19 np0005536586 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 26 07:08:19 np0005536586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 26 07:08:19 np0005536586 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 26 07:08:19 np0005536586 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 07:08:19 np0005536586 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 07:08:19 np0005536586 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 07:08:19 np0005536586 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 26 07:08:19 np0005536586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 26 07:08:19 np0005536586 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 26 07:08:19 np0005536586 kernel: Console: colour VGA+ 80x25
Nov 26 07:08:19 np0005536586 kernel: printk: console [ttyS0] enabled
Nov 26 07:08:19 np0005536586 kernel: ACPI: Core revision 20230331
Nov 26 07:08:19 np0005536586 kernel: APIC: Switch to symmetric I/O mode setup
Nov 26 07:08:19 np0005536586 kernel: x2apic enabled
Nov 26 07:08:19 np0005536586 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 26 07:08:19 np0005536586 kernel: kvm-guest: setup PV IPIs
Nov 26 07:08:19 np0005536586 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 26 07:08:19 np0005536586 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 26 07:08:19 np0005536586 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 26 07:08:19 np0005536586 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 26 07:08:19 np0005536586 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 26 07:08:19 np0005536586 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 26 07:08:19 np0005536586 kernel: Spectre V2 : Mitigation: Retpolines
Nov 26 07:08:19 np0005536586 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 26 07:08:19 np0005536586 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 26 07:08:19 np0005536586 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 26 07:08:19 np0005536586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 26 07:08:19 np0005536586 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 26 07:08:19 np0005536586 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 26 07:08:19 np0005536586 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 26 07:08:19 np0005536586 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 26 07:08:19 np0005536586 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 26 07:08:19 np0005536586 kernel: Freeing SMP alternatives memory: 40K
Nov 26 07:08:19 np0005536586 kernel: pid_max: default: 32768 minimum: 301
Nov 26 07:08:19 np0005536586 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 26 07:08:19 np0005536586 kernel: landlock: Up and running.
Nov 26 07:08:19 np0005536586 kernel: Yama: becoming mindful.
Nov 26 07:08:19 np0005536586 kernel: SELinux:  Initializing.
Nov 26 07:08:19 np0005536586 kernel: LSM support for eBPF active
Nov 26 07:08:19 np0005536586 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 26 07:08:19 np0005536586 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 26 07:08:19 np0005536586 kernel: ... version:                0
Nov 26 07:08:19 np0005536586 kernel: ... bit width:              48
Nov 26 07:08:19 np0005536586 kernel: ... generic registers:      6
Nov 26 07:08:19 np0005536586 kernel: ... value mask:             0000ffffffffffff
Nov 26 07:08:19 np0005536586 kernel: ... max period:             00007fffffffffff
Nov 26 07:08:19 np0005536586 kernel: ... fixed-purpose events:   0
Nov 26 07:08:19 np0005536586 kernel: ... event mask:             000000000000003f
Nov 26 07:08:19 np0005536586 kernel: signal: max sigframe size: 3376
Nov 26 07:08:19 np0005536586 kernel: rcu: Hierarchical SRCU implementation.
Nov 26 07:08:19 np0005536586 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 26 07:08:19 np0005536586 kernel: smp: Bringing up secondary CPUs ...
Nov 26 07:08:19 np0005536586 kernel: smpboot: x86: Booting SMP configuration:
Nov 26 07:08:19 np0005536586 kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 26 07:08:19 np0005536586 kernel: smp: Brought up 1 node, 4 CPUs
Nov 26 07:08:19 np0005536586 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 26 07:08:19 np0005536586 kernel: node 0 deferred pages initialised in 8ms
Nov 26 07:08:19 np0005536586 kernel: Memory: 7768176K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 615228K reserved, 0K cma-reserved)
Nov 26 07:08:19 np0005536586 kernel: devtmpfs: initialized
Nov 26 07:08:19 np0005536586 kernel: x86/mm: Memory block size: 128MB
Nov 26 07:08:19 np0005536586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 26 07:08:19 np0005536586 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: pinctrl core: initialized pinctrl subsystem
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 26 07:08:19 np0005536586 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 26 07:08:19 np0005536586 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 26 07:08:19 np0005536586 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 26 07:08:19 np0005536586 kernel: audit: initializing netlink subsys (disabled)
Nov 26 07:08:19 np0005536586 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 26 07:08:19 np0005536586 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 26 07:08:19 np0005536586 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 26 07:08:19 np0005536586 kernel: audit: type=2000 audit(1764158898.754:1): state=initialized audit_enabled=0 res=1
Nov 26 07:08:19 np0005536586 kernel: cpuidle: using governor menu
Nov 26 07:08:19 np0005536586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 26 07:08:19 np0005536586 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 26 07:08:19 np0005536586 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 26 07:08:19 np0005536586 kernel: PCI: Using configuration type 1 for base access
Nov 26 07:08:19 np0005536586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 26 07:08:19 np0005536586 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 26 07:08:19 np0005536586 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 26 07:08:19 np0005536586 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 26 07:08:19 np0005536586 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 26 07:08:19 np0005536586 kernel: Demotion targets for Node 0: null
Nov 26 07:08:19 np0005536586 kernel: cryptd: max_cpu_qlen set to 1000
Nov 26 07:08:19 np0005536586 kernel: ACPI: Added _OSI(Module Device)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Added _OSI(Processor Device)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 26 07:08:19 np0005536586 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 26 07:08:19 np0005536586 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 26 07:08:19 np0005536586 kernel: ACPI: Interpreter enabled
Nov 26 07:08:19 np0005536586 kernel: ACPI: PM: (supports S0 S5)
Nov 26 07:08:19 np0005536586 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 26 07:08:19 np0005536586 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 26 07:08:19 np0005536586 kernel: PCI: Using E820 reservations for host bridge windows
Nov 26 07:08:19 np0005536586 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 26 07:08:19 np0005536586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 26 07:08:19 np0005536586 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 26 07:08:19 np0005536586 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 26 07:08:19 np0005536586 kernel: PCI host bridge to bus 0000:00
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:02: extended config space not accessible
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [1] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [2] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [3] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [4] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [5] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [6] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [7] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [8] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [9] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [10] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [11] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [12] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [13] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [14] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [15] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [16] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [17] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [18] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [19] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [20] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [21] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [22] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [23] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [24] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [25] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [26] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [27] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [28] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [29] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [30] registered
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [31] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-2] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-3] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-4] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-5] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 07:08:19 np0005536586 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-6] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-7] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-8] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-9] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-10] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-11] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-12] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-13] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-14] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-15] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-16] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 07:08:19 np0005536586 kernel: acpiphp: Slot [0-17] registered
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 26 07:08:19 np0005536586 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 26 07:08:19 np0005536586 kernel: iommu: Default domain type: Translated
Nov 26 07:08:19 np0005536586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 26 07:08:19 np0005536586 kernel: SCSI subsystem initialized
Nov 26 07:08:19 np0005536586 kernel: ACPI: bus type USB registered
Nov 26 07:08:19 np0005536586 kernel: usbcore: registered new interface driver usbfs
Nov 26 07:08:19 np0005536586 kernel: usbcore: registered new interface driver hub
Nov 26 07:08:19 np0005536586 kernel: usbcore: registered new device driver usb
Nov 26 07:08:19 np0005536586 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 26 07:08:19 np0005536586 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 26 07:08:19 np0005536586 kernel: PTP clock support registered
Nov 26 07:08:19 np0005536586 kernel: EDAC MC: Ver: 3.0.0
Nov 26 07:08:19 np0005536586 kernel: NetLabel: Initializing
Nov 26 07:08:19 np0005536586 kernel: NetLabel:  domain hash size = 128
Nov 26 07:08:19 np0005536586 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 26 07:08:19 np0005536586 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 26 07:08:19 np0005536586 kernel: PCI: Using ACPI for IRQ routing
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 26 07:08:19 np0005536586 kernel: vgaarb: loaded
Nov 26 07:08:19 np0005536586 kernel: clocksource: Switched to clocksource kvm-clock
Nov 26 07:08:19 np0005536586 kernel: VFS: Disk quotas dquot_6.6.0
Nov 26 07:08:19 np0005536586 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 26 07:08:19 np0005536586 kernel: pnp: PnP ACPI init
Nov 26 07:08:19 np0005536586 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 26 07:08:19 np0005536586 kernel: pnp: PnP ACPI: found 5 devices
Nov 26 07:08:19 np0005536586 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_INET protocol family
Nov 26 07:08:19 np0005536586 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 26 07:08:19 np0005536586 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_XDP protocol family
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 26 07:08:19 np0005536586 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 07:08:19 np0005536586 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 26 07:08:19 np0005536586 kernel: PCI: CLS 0 bytes, default 64
Nov 26 07:08:19 np0005536586 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 26 07:08:19 np0005536586 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Nov 26 07:08:19 np0005536586 kernel: Trying to unpack rootfs image as initramfs...
Nov 26 07:08:19 np0005536586 kernel: ACPI: bus type thunderbolt registered
Nov 26 07:08:19 np0005536586 kernel: Initialise system trusted keyrings
Nov 26 07:08:19 np0005536586 kernel: Key type blacklist registered
Nov 26 07:08:19 np0005536586 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 26 07:08:19 np0005536586 kernel: zbud: loaded
Nov 26 07:08:19 np0005536586 kernel: integrity: Platform Keyring initialized
Nov 26 07:08:19 np0005536586 kernel: integrity: Machine keyring initialized
Nov 26 07:08:19 np0005536586 kernel: Freeing initrd memory: 85868K
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_ALG protocol family
Nov 26 07:08:19 np0005536586 kernel: xor: automatically using best checksumming function   avx       
Nov 26 07:08:19 np0005536586 kernel: Key type asymmetric registered
Nov 26 07:08:19 np0005536586 kernel: Asymmetric key parser 'x509' registered
Nov 26 07:08:19 np0005536586 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 26 07:08:19 np0005536586 kernel: io scheduler mq-deadline registered
Nov 26 07:08:19 np0005536586 kernel: io scheduler kyber registered
Nov 26 07:08:19 np0005536586 kernel: io scheduler bfq registered
Nov 26 07:08:19 np0005536586 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 26 07:08:19 np0005536586 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 26 07:08:19 np0005536586 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 26 07:08:19 np0005536586 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 26 07:08:19 np0005536586 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 26 07:08:19 np0005536586 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 26 07:08:19 np0005536586 kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 26 07:08:19 np0005536586 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 26 07:08:19 np0005536586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 26 07:08:19 np0005536586 kernel: ACPI: button: Power Button [PWRF]
Nov 26 07:08:19 np0005536586 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 26 07:08:19 np0005536586 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 26 07:08:19 np0005536586 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 26 07:08:19 np0005536586 kernel: Non-volatile memory driver v1.3
Nov 26 07:08:19 np0005536586 kernel: rdac: device handler registered
Nov 26 07:08:19 np0005536586 kernel: hp_sw: device handler registered
Nov 26 07:08:19 np0005536586 kernel: emc: device handler registered
Nov 26 07:08:19 np0005536586 kernel: alua: device handler registered
Nov 26 07:08:19 np0005536586 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 26 07:08:19 np0005536586 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 26 07:08:19 np0005536586 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 26 07:08:19 np0005536586 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 26 07:08:19 np0005536586 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 26 07:08:19 np0005536586 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 26 07:08:19 np0005536586 kernel: usb usb1: Product: UHCI Host Controller
Nov 26 07:08:19 np0005536586 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 26 07:08:19 np0005536586 kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 26 07:08:19 np0005536586 kernel: hub 1-0:1.0: USB hub found
Nov 26 07:08:19 np0005536586 kernel: hub 1-0:1.0: 2 ports detected
Nov 26 07:08:19 np0005536586 kernel: usbcore: registered new interface driver usbserial_generic
Nov 26 07:08:19 np0005536586 kernel: usbserial: USB Serial support registered for generic
Nov 26 07:08:19 np0005536586 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 26 07:08:19 np0005536586 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 26 07:08:19 np0005536586 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 26 07:08:19 np0005536586 kernel: mousedev: PS/2 mouse device common for all mice
Nov 26 07:08:19 np0005536586 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 26 07:08:19 np0005536586 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 26 07:08:19 np0005536586 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 26 07:08:19 np0005536586 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 26 07:08:19 np0005536586 kernel: rtc_cmos 00:03: registered as rtc0
Nov 26 07:08:19 np0005536586 kernel: rtc_cmos 00:03: setting system clock to 2025-11-26T12:08:19 UTC (1764158899)
Nov 26 07:08:19 np0005536586 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 26 07:08:19 np0005536586 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 26 07:08:19 np0005536586 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 26 07:08:19 np0005536586 kernel: usbcore: registered new interface driver usbhid
Nov 26 07:08:19 np0005536586 kernel: usbhid: USB HID core driver
Nov 26 07:08:19 np0005536586 kernel: drop_monitor: Initializing network drop monitor service
Nov 26 07:08:19 np0005536586 kernel: Initializing XFRM netlink socket
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_INET6 protocol family
Nov 26 07:08:19 np0005536586 kernel: Segment Routing with IPv6
Nov 26 07:08:19 np0005536586 kernel: NET: Registered PF_PACKET protocol family
Nov 26 07:08:19 np0005536586 kernel: mpls_gso: MPLS GSO support
Nov 26 07:08:19 np0005536586 kernel: IPI shorthand broadcast: enabled
Nov 26 07:08:19 np0005536586 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 26 07:08:19 np0005536586 kernel: AES CTR mode by8 optimization enabled
Nov 26 07:08:19 np0005536586 kernel: sched_clock: Marking stable (1140001870, 142016847)->(1348772546, -66753829)
Nov 26 07:08:19 np0005536586 kernel: registered taskstats version 1
Nov 26 07:08:19 np0005536586 kernel: Loading compiled-in X.509 certificates
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 26 07:08:19 np0005536586 kernel: Demotion targets for Node 0: null
Nov 26 07:08:19 np0005536586 kernel: page_owner is disabled
Nov 26 07:08:19 np0005536586 kernel: Key type .fscrypt registered
Nov 26 07:08:19 np0005536586 kernel: Key type fscrypt-provisioning registered
Nov 26 07:08:19 np0005536586 kernel: Key type big_key registered
Nov 26 07:08:19 np0005536586 kernel: Key type encrypted registered
Nov 26 07:08:19 np0005536586 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 26 07:08:19 np0005536586 kernel: Loading compiled-in module X.509 certificates
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 07:08:19 np0005536586 kernel: ima: Allocated hash algorithm: sha256
Nov 26 07:08:19 np0005536586 kernel: ima: No architecture policies found
Nov 26 07:08:19 np0005536586 kernel: evm: Initialising EVM extended attributes:
Nov 26 07:08:19 np0005536586 kernel: evm: security.selinux
Nov 26 07:08:19 np0005536586 kernel: evm: security.SMACK64 (disabled)
Nov 26 07:08:19 np0005536586 kernel: evm: security.SMACK64EXEC (disabled)
Nov 26 07:08:19 np0005536586 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 26 07:08:19 np0005536586 kernel: evm: security.SMACK64MMAP (disabled)
Nov 26 07:08:19 np0005536586 kernel: evm: security.apparmor (disabled)
Nov 26 07:08:19 np0005536586 kernel: evm: security.ima
Nov 26 07:08:19 np0005536586 kernel: evm: security.capability
Nov 26 07:08:19 np0005536586 kernel: evm: HMAC attrs: 0x1
Nov 26 07:08:19 np0005536586 kernel: Running certificate verification RSA selftest
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 26 07:08:19 np0005536586 kernel: Running certificate verification ECDSA selftest
Nov 26 07:08:19 np0005536586 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 26 07:08:19 np0005536586 kernel: clk: Disabling unused clocks
Nov 26 07:08:19 np0005536586 kernel: Freeing unused decrypted memory: 2028K
Nov 26 07:08:19 np0005536586 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 26 07:08:19 np0005536586 kernel: Write protecting the kernel read-only data: 30720k
Nov 26 07:08:19 np0005536586 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 26 07:08:19 np0005536586 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 26 07:08:19 np0005536586 kernel: Run /init as init process
Nov 26 07:08:19 np0005536586 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 07:08:19 np0005536586 systemd: Detected virtualization kvm.
Nov 26 07:08:19 np0005536586 systemd: Detected architecture x86-64.
Nov 26 07:08:19 np0005536586 systemd: Running in initrd.
Nov 26 07:08:19 np0005536586 systemd: No hostname configured, using default hostname.
Nov 26 07:08:19 np0005536586 systemd: Hostname set to <localhost>.
Nov 26 07:08:19 np0005536586 systemd: Initializing machine ID from VM UUID.
Nov 26 07:08:19 np0005536586 systemd: Queued start job for default target Initrd Default Target.
Nov 26 07:08:19 np0005536586 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 07:08:19 np0005536586 systemd: Reached target Local Encrypted Volumes.
Nov 26 07:08:19 np0005536586 systemd: Reached target Initrd /usr File System.
Nov 26 07:08:19 np0005536586 systemd: Reached target Local File Systems.
Nov 26 07:08:19 np0005536586 systemd: Reached target Path Units.
Nov 26 07:08:19 np0005536586 systemd: Reached target Slice Units.
Nov 26 07:08:19 np0005536586 systemd: Reached target Swaps.
Nov 26 07:08:19 np0005536586 systemd: Reached target Timer Units.
Nov 26 07:08:19 np0005536586 systemd: Listening on D-Bus System Message Bus Socket.
Nov 26 07:08:19 np0005536586 systemd: Listening on Journal Socket (/dev/log).
Nov 26 07:08:19 np0005536586 systemd: Listening on Journal Socket.
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: Manufacturer: QEMU
Nov 26 07:08:19 np0005536586 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 26 07:08:19 np0005536586 systemd: Listening on udev Control Socket.
Nov 26 07:08:19 np0005536586 systemd: Listening on udev Kernel Socket.
Nov 26 07:08:19 np0005536586 systemd: Reached target Socket Units.
Nov 26 07:08:19 np0005536586 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 26 07:08:19 np0005536586 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Nov 26 07:08:19 np0005536586 systemd: Starting Create List of Static Device Nodes...
Nov 26 07:08:19 np0005536586 systemd: Starting Journal Service...
Nov 26 07:08:19 np0005536586 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 07:08:19 np0005536586 systemd: Starting Apply Kernel Variables...
Nov 26 07:08:19 np0005536586 systemd: Starting Create System Users...
Nov 26 07:08:19 np0005536586 systemd: Starting Setup Virtual Console...
Nov 26 07:08:19 np0005536586 systemd: Finished Create List of Static Device Nodes.
Nov 26 07:08:19 np0005536586 systemd: Finished Apply Kernel Variables.
Nov 26 07:08:19 np0005536586 systemd-journald[284]: Journal started
Nov 26 07:08:19 np0005536586 systemd-journald[284]: Runtime Journal (/run/log/journal/0a08c8a3e2a843648947610c4936d879) is 8.0M, max 153.6M, 145.6M free.
Nov 26 07:08:19 np0005536586 systemd: Started Journal Service.
Nov 26 07:08:19 np0005536586 systemd-sysusers[287]: Creating group 'users' with GID 100.
Nov 26 07:08:19 np0005536586 systemd-sysusers[287]: Creating group 'dbus' with GID 81.
Nov 26 07:08:19 np0005536586 systemd-sysusers[287]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 26 07:08:19 np0005536586 systemd[1]: Finished Create System Users.
Nov 26 07:08:19 np0005536586 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 07:08:20 np0005536586 systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 07:08:20 np0005536586 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 07:08:20 np0005536586 systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 07:08:20 np0005536586 systemd[1]: Finished Setup Virtual Console.
Nov 26 07:08:20 np0005536586 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting dracut cmdline hook...
Nov 26 07:08:20 np0005536586 dracut-cmdline[301]: dracut-9 dracut-057-102.git20250818.el9
Nov 26 07:08:20 np0005536586 dracut-cmdline[301]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 07:08:20 np0005536586 systemd[1]: Finished dracut cmdline hook.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting dracut pre-udev hook...
Nov 26 07:08:20 np0005536586 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 26 07:08:20 np0005536586 kernel: device-mapper: uevent: version 1.0.3
Nov 26 07:08:20 np0005536586 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 26 07:08:20 np0005536586 kernel: RPC: Registered named UNIX socket transport module.
Nov 26 07:08:20 np0005536586 kernel: RPC: Registered udp transport module.
Nov 26 07:08:20 np0005536586 kernel: RPC: Registered tcp transport module.
Nov 26 07:08:20 np0005536586 kernel: RPC: Registered tcp-with-tls transport module.
Nov 26 07:08:20 np0005536586 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 26 07:08:20 np0005536586 rpc.statd[417]: Version 2.5.4 starting
Nov 26 07:08:20 np0005536586 rpc.statd[417]: Initializing NSM state
Nov 26 07:08:20 np0005536586 rpc.idmapd[422]: Setting log level to 0
Nov 26 07:08:20 np0005536586 systemd[1]: Finished dracut pre-udev hook.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 07:08:20 np0005536586 systemd-udevd[435]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 07:08:20 np0005536586 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting dracut pre-trigger hook...
Nov 26 07:08:20 np0005536586 systemd[1]: Finished dracut pre-trigger hook.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting Coldplug All udev Devices...
Nov 26 07:08:20 np0005536586 systemd[1]: Created slice Slice /system/modprobe.
Nov 26 07:08:20 np0005536586 systemd[1]: Starting Load Kernel Module configfs...
Nov 26 07:08:20 np0005536586 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 07:08:20 np0005536586 systemd[1]: Finished Load Kernel Module configfs.
Nov 26 07:08:20 np0005536586 systemd[1]: Finished Coldplug All udev Devices.
Nov 26 07:08:20 np0005536586 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 07:08:20 np0005536586 systemd[1]: Reached target Network.
Nov 26 07:08:20 np0005536586 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 07:08:20 np0005536586 systemd[1]: Starting dracut initqueue hook...
Nov 26 07:08:20 np0005536586 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 26 07:08:20 np0005536586 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 26 07:08:20 np0005536586 kernel: vda: vda1
Nov 26 07:08:20 np0005536586 systemd-udevd[456]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:08:20 np0005536586 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 07:08:20 np0005536586 systemd[1]: Reached target Initrd Root Device.
Nov 26 07:08:20 np0005536586 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 26 07:08:20 np0005536586 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 26 07:08:20 np0005536586 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 26 07:08:20 np0005536586 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 26 07:08:20 np0005536586 kernel: scsi host0: ahci
Nov 26 07:08:20 np0005536586 kernel: scsi host1: ahci
Nov 26 07:08:20 np0005536586 kernel: scsi host2: ahci
Nov 26 07:08:20 np0005536586 kernel: scsi host3: ahci
Nov 26 07:08:20 np0005536586 kernel: scsi host4: ahci
Nov 26 07:08:20 np0005536586 kernel: scsi host5: ahci
Nov 26 07:08:20 np0005536586 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 26 07:08:20 np0005536586 systemd[1]: Mounting Kernel Configuration File System...
Nov 26 07:08:20 np0005536586 systemd[1]: Mounted Kernel Configuration File System.
Nov 26 07:08:20 np0005536586 systemd[1]: Reached target System Initialization.
Nov 26 07:08:20 np0005536586 systemd[1]: Reached target Basic System.
Nov 26 07:08:21 np0005536586 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 26 07:08:21 np0005536586 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 26 07:08:21 np0005536586 kernel: ata1.00: applying bridge limits
Nov 26 07:08:21 np0005536586 kernel: ata1.00: configured for UDMA/100
Nov 26 07:08:21 np0005536586 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 26 07:08:21 np0005536586 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 26 07:08:21 np0005536586 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 26 07:08:21 np0005536586 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 26 07:08:21 np0005536586 systemd[1]: Finished dracut initqueue hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Remote File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting dracut pre-mount hook...
Nov 26 07:08:21 np0005536586 systemd[1]: Finished dracut pre-mount hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 26 07:08:21 np0005536586 systemd-fsck[526]: /usr/sbin/fsck.xfs: XFS file system.
Nov 26 07:08:21 np0005536586 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 07:08:21 np0005536586 systemd[1]: Mounting /sysroot...
Nov 26 07:08:21 np0005536586 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 26 07:08:21 np0005536586 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 26 07:08:21 np0005536586 kernel: XFS (vda1): Ending clean mount
Nov 26 07:08:21 np0005536586 systemd[1]: Mounted /sysroot.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Initrd Root File System.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 26 07:08:21 np0005536586 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Initrd File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Initrd Default Target.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting dracut mount hook...
Nov 26 07:08:21 np0005536586 systemd[1]: Finished dracut mount hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 26 07:08:21 np0005536586 rpc.idmapd[422]: exiting on signal 15
Nov 26 07:08:21 np0005536586 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Network.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Timer Units.
Nov 26 07:08:21 np0005536586 systemd[1]: dbus.socket: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Initrd Default Target.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Basic System.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Initrd Root Device.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Initrd /usr File System.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Path Units.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Remote File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Slice Units.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Socket Units.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target System Initialization.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Local File Systems.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Swaps.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut mount hook.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut pre-mount hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut initqueue hook.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Coldplug All udev Devices.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut pre-trigger hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Setup Virtual Console.
Nov 26 07:08:21 np0005536586 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Closed udev Control Socket.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Closed udev Kernel Socket.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut pre-udev hook.
Nov 26 07:08:21 np0005536586 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped dracut cmdline hook.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting Cleanup udev Database...
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 26 07:08:21 np0005536586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 26 07:08:21 np0005536586 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Stopped Create System Users.
Nov 26 07:08:21 np0005536586 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 26 07:08:21 np0005536586 systemd[1]: Finished Cleanup udev Database.
Nov 26 07:08:21 np0005536586 systemd[1]: Reached target Switch Root.
Nov 26 07:08:21 np0005536586 systemd[1]: Starting Switch Root...
Nov 26 07:08:21 np0005536586 systemd[1]: Switching root.
Nov 26 07:08:21 np0005536586 systemd-journald[284]: Journal stopped
Nov 26 07:08:22 np0005536586 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 26 07:08:22 np0005536586 kernel: audit: type=1404 audit(1764158901.858:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:08:22 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:08:22 np0005536586 kernel: audit: type=1403 audit(1764158901.980:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 26 07:08:22 np0005536586 systemd: Successfully loaded SELinux policy in 125.106ms.
Nov 26 07:08:22 np0005536586 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.490ms.
Nov 26 07:08:22 np0005536586 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 07:08:22 np0005536586 systemd: Detected virtualization kvm.
Nov 26 07:08:22 np0005536586 systemd: Detected architecture x86-64.
Nov 26 07:08:22 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:08:22 np0005536586 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd: Stopped Switch Root.
Nov 26 07:08:22 np0005536586 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 26 07:08:22 np0005536586 systemd: Created slice Slice /system/getty.
Nov 26 07:08:22 np0005536586 systemd: Created slice Slice /system/serial-getty.
Nov 26 07:08:22 np0005536586 systemd: Created slice Slice /system/sshd-keygen.
Nov 26 07:08:22 np0005536586 systemd: Created slice User and Session Slice.
Nov 26 07:08:22 np0005536586 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 07:08:22 np0005536586 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 26 07:08:22 np0005536586 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 26 07:08:22 np0005536586 systemd: Reached target Local Encrypted Volumes.
Nov 26 07:08:22 np0005536586 systemd: Stopped target Switch Root.
Nov 26 07:08:22 np0005536586 systemd: Stopped target Initrd File Systems.
Nov 26 07:08:22 np0005536586 systemd: Stopped target Initrd Root File System.
Nov 26 07:08:22 np0005536586 systemd: Reached target Local Integrity Protected Volumes.
Nov 26 07:08:22 np0005536586 systemd: Reached target Path Units.
Nov 26 07:08:22 np0005536586 systemd: Reached target rpc_pipefs.target.
Nov 26 07:08:22 np0005536586 systemd: Reached target Slice Units.
Nov 26 07:08:22 np0005536586 systemd: Reached target Swaps.
Nov 26 07:08:22 np0005536586 systemd: Reached target Local Verity Protected Volumes.
Nov 26 07:08:22 np0005536586 systemd: Listening on RPCbind Server Activation Socket.
Nov 26 07:08:22 np0005536586 systemd: Reached target RPC Port Mapper.
Nov 26 07:08:22 np0005536586 systemd: Listening on Process Core Dump Socket.
Nov 26 07:08:22 np0005536586 systemd: Listening on initctl Compatibility Named Pipe.
Nov 26 07:08:22 np0005536586 systemd: Listening on udev Control Socket.
Nov 26 07:08:22 np0005536586 systemd: Listening on udev Kernel Socket.
Nov 26 07:08:22 np0005536586 systemd: Mounting Huge Pages File System...
Nov 26 07:08:22 np0005536586 systemd: Mounting POSIX Message Queue File System...
Nov 26 07:08:22 np0005536586 systemd: Mounting Kernel Debug File System...
Nov 26 07:08:22 np0005536586 systemd: Mounting Kernel Trace File System...
Nov 26 07:08:22 np0005536586 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 07:08:22 np0005536586 systemd: Starting Create List of Static Device Nodes...
Nov 26 07:08:22 np0005536586 systemd: Starting Load Kernel Module configfs...
Nov 26 07:08:22 np0005536586 systemd: Starting Load Kernel Module drm...
Nov 26 07:08:22 np0005536586 systemd: Starting Load Kernel Module efi_pstore...
Nov 26 07:08:22 np0005536586 systemd: Starting Load Kernel Module fuse...
Nov 26 07:08:22 np0005536586 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 26 07:08:22 np0005536586 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd: Stopped File System Check on Root Device.
Nov 26 07:08:22 np0005536586 systemd: Stopped Journal Service.
Nov 26 07:08:22 np0005536586 systemd: Starting Journal Service...
Nov 26 07:08:22 np0005536586 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 07:08:22 np0005536586 systemd: Starting Generate network units from Kernel command line...
Nov 26 07:08:22 np0005536586 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 07:08:22 np0005536586 systemd: Starting Remount Root and Kernel File Systems...
Nov 26 07:08:22 np0005536586 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 26 07:08:22 np0005536586 systemd: Starting Apply Kernel Variables...
Nov 26 07:08:22 np0005536586 systemd: Starting Coldplug All udev Devices...
Nov 26 07:08:22 np0005536586 systemd: Mounted Huge Pages File System.
Nov 26 07:08:22 np0005536586 systemd: Mounted POSIX Message Queue File System.
Nov 26 07:08:22 np0005536586 systemd: Mounted Kernel Debug File System.
Nov 26 07:08:22 np0005536586 systemd: Mounted Kernel Trace File System.
Nov 26 07:08:22 np0005536586 systemd: Finished Create List of Static Device Nodes.
Nov 26 07:08:22 np0005536586 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd: Finished Load Kernel Module configfs.
Nov 26 07:08:22 np0005536586 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd: Finished Load Kernel Module efi_pstore.
Nov 26 07:08:22 np0005536586 systemd-journald[647]: Journal started
Nov 26 07:08:22 np0005536586 systemd-journald[647]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 07:08:22 np0005536586 systemd[1]: Queued start job for default target Multi-User System.
Nov 26 07:08:22 np0005536586 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd: Started Journal Service.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Generate network units from Kernel command line.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Apply Kernel Variables.
Nov 26 07:08:22 np0005536586 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 26 07:08:22 np0005536586 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 07:08:22 np0005536586 kernel: ACPI: bus type drm_connector registered
Nov 26 07:08:22 np0005536586 kernel: fuse: init (API version 7.37)
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Rebuild Hardware Database...
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 26 07:08:22 np0005536586 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Load/Save OS Random Seed...
Nov 26 07:08:22 np0005536586 systemd-journald[647]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 07:08:22 np0005536586 systemd-journald[647]: Received client request to flush runtime journal.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Create System Users...
Nov 26 07:08:22 np0005536586 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Load Kernel Module drm.
Nov 26 07:08:22 np0005536586 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Load Kernel Module fuse.
Nov 26 07:08:22 np0005536586 systemd[1]: Mounting FUSE Control File System...
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 26 07:08:22 np0005536586 systemd[1]: Mounted FUSE Control File System.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Load/Save OS Random Seed.
Nov 26 07:08:22 np0005536586 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Create System Users.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Coldplug All udev Devices.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 07:08:22 np0005536586 systemd[1]: Reached target Preparation for Local File Systems.
Nov 26 07:08:22 np0005536586 systemd[1]: Reached target Local File Systems.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 26 07:08:22 np0005536586 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 26 07:08:22 np0005536586 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 26 07:08:22 np0005536586 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Automatic Boot Loader Update...
Nov 26 07:08:22 np0005536586 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 07:08:22 np0005536586 bootctl[664]: Couldn't find EFI system partition, skipping.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Automatic Boot Loader Update.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Security Auditing Service...
Nov 26 07:08:22 np0005536586 systemd[1]: Starting RPC Bind...
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Rebuild Journal Catalog...
Nov 26 07:08:22 np0005536586 auditd[670]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 26 07:08:22 np0005536586 auditd[670]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Rebuild Journal Catalog.
Nov 26 07:08:22 np0005536586 systemd[1]: Started RPC Bind.
Nov 26 07:08:22 np0005536586 augenrules[675]: /sbin/augenrules: No change
Nov 26 07:08:22 np0005536586 augenrules[690]: No rules
Nov 26 07:08:22 np0005536586 augenrules[690]: enabled 1
Nov 26 07:08:22 np0005536586 augenrules[690]: failure 1
Nov 26 07:08:22 np0005536586 augenrules[690]: pid 670
Nov 26 07:08:22 np0005536586 augenrules[690]: rate_limit 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_limit 8192
Nov 26 07:08:22 np0005536586 augenrules[690]: lost 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time 60000
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time_actual 0
Nov 26 07:08:22 np0005536586 augenrules[690]: enabled 1
Nov 26 07:08:22 np0005536586 augenrules[690]: failure 1
Nov 26 07:08:22 np0005536586 augenrules[690]: pid 670
Nov 26 07:08:22 np0005536586 augenrules[690]: rate_limit 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_limit 8192
Nov 26 07:08:22 np0005536586 augenrules[690]: lost 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time 60000
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time_actual 0
Nov 26 07:08:22 np0005536586 augenrules[690]: enabled 1
Nov 26 07:08:22 np0005536586 augenrules[690]: failure 1
Nov 26 07:08:22 np0005536586 augenrules[690]: pid 670
Nov 26 07:08:22 np0005536586 augenrules[690]: rate_limit 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_limit 8192
Nov 26 07:08:22 np0005536586 augenrules[690]: lost 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog 0
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time 60000
Nov 26 07:08:22 np0005536586 augenrules[690]: backlog_wait_time_actual 0
Nov 26 07:08:22 np0005536586 systemd[1]: Started Security Auditing Service.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Rebuild Hardware Database.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 07:08:22 np0005536586 systemd-udevd[699]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 07:08:22 np0005536586 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Load Kernel Module configfs...
Nov 26 07:08:22 np0005536586 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Load Kernel Module configfs.
Nov 26 07:08:22 np0005536586 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 26 07:08:22 np0005536586 systemd[1]: Starting Update is Completed...
Nov 26 07:08:22 np0005536586 systemd[1]: Finished Update is Completed.
Nov 26 07:08:22 np0005536586 systemd-udevd[705]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:08:22 np0005536586 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 26 07:08:23 np0005536586 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 26 07:08:23 np0005536586 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 26 07:08:23 np0005536586 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 26 07:08:23 np0005536586 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 26 07:08:23 np0005536586 kernel: iTCO_vendor_support: vendor-support=0
Nov 26 07:08:23 np0005536586 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 26 07:08:23 np0005536586 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 26 07:08:23 np0005536586 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 26 07:08:23 np0005536586 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 26 07:08:23 np0005536586 kernel: Console: switching to colour dummy device 80x25
Nov 26 07:08:23 np0005536586 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 26 07:08:23 np0005536586 kernel: [drm] features: -context_init
Nov 26 07:08:23 np0005536586 kernel: [drm] number of scanouts: 1
Nov 26 07:08:23 np0005536586 kernel: [drm] number of cap sets: 0
Nov 26 07:08:23 np0005536586 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 26 07:08:23 np0005536586 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 26 07:08:23 np0005536586 kernel: Console: switching to colour frame buffer device 160x50
Nov 26 07:08:23 np0005536586 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: TSC scaling supported
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: Nested Virtualization enabled
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: Nested Paging enabled
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: LBR virtualization supported
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 26 07:08:23 np0005536586 kernel: kvm_amd: Virtual GIF supported
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target System Initialization.
Nov 26 07:08:23 np0005536586 systemd[1]: Started dnf makecache --timer.
Nov 26 07:08:23 np0005536586 systemd[1]: Started Daily rotation of log files.
Nov 26 07:08:23 np0005536586 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target Timer Units.
Nov 26 07:08:23 np0005536586 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 07:08:23 np0005536586 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target Socket Units.
Nov 26 07:08:23 np0005536586 systemd[1]: Starting D-Bus System Message Bus...
Nov 26 07:08:23 np0005536586 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 07:08:23 np0005536586 systemd[1]: Started D-Bus System Message Bus.
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target Basic System.
Nov 26 07:08:23 np0005536586 dbus-broker-lau[766]: Ready
Nov 26 07:08:23 np0005536586 systemd[1]: Starting NTP client/server...
Nov 26 07:08:23 np0005536586 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 26 07:08:23 np0005536586 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 26 07:08:23 np0005536586 systemd[1]: Starting IPv4 firewall with iptables...
Nov 26 07:08:23 np0005536586 systemd[1]: Started irqbalance daemon.
Nov 26 07:08:23 np0005536586 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 26 07:08:23 np0005536586 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:08:23 np0005536586 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:08:23 np0005536586 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target sshd-keygen.target.
Nov 26 07:08:23 np0005536586 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 26 07:08:23 np0005536586 systemd[1]: Reached target User and Group Name Lookups.
Nov 26 07:08:23 np0005536586 systemd[1]: Starting User Login Management...
Nov 26 07:08:23 np0005536586 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 26 07:08:23 np0005536586 chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 07:08:23 np0005536586 chronyd[784]: Loaded 0 symmetric keys
Nov 26 07:08:23 np0005536586 chronyd[784]: Using right/UTC timezone to obtain leap second data
Nov 26 07:08:23 np0005536586 chronyd[784]: Loaded seccomp filter (level 2)
Nov 26 07:08:23 np0005536586 systemd[1]: Started NTP client/server.
Nov 26 07:08:23 np0005536586 systemd-logind[777]: New seat seat0.
Nov 26 07:08:23 np0005536586 systemd-logind[777]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 07:08:23 np0005536586 systemd-logind[777]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 07:08:23 np0005536586 systemd[1]: Started User Login Management.
Nov 26 07:08:23 np0005536586 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 26 07:08:23 np0005536586 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 26 07:08:23 np0005536586 iptables.init[771]: iptables: Applying firewall rules: [  OK  ]
Nov 26 07:08:23 np0005536586 systemd[1]: Finished IPv4 firewall with iptables.
Nov 26 07:08:23 np0005536586 cloud-init[794]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 26 Nov 2025 12:08:23 +0000. Up 5.49 seconds.
Nov 26 07:08:24 np0005536586 systemd[1]: run-cloud\x2dinit-tmp-tmp6y_bti08.mount: Deactivated successfully.
Nov 26 07:08:24 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 07:08:24 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 07:08:24 np0005536586 systemd-hostnamed[808]: Hostname set to <np0005536586> (static)
Nov 26 07:08:24 np0005536586 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 26 07:08:24 np0005536586 systemd[1]: Reached target Preparation for Network.
Nov 26 07:08:24 np0005536586 systemd[1]: Starting Network Manager...
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3134] NetworkManager (version 1.54.1-1.el9) is starting... (boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3137] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3222] manager[0x558140d6e080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3250] hostname: hostname: using hostnamed
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3250] hostname: static hostname changed from (none) to "np0005536586"
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3252] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3328] manager[0x558140d6e080]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3328] manager[0x558140d6e080]: rfkill: WWAN hardware radio set enabled
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3373] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3373] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3374] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3374] manager: Networking is enabled by state file
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3375] settings: Loaded settings plugin: keyfile (internal)
Nov 26 07:08:24 np0005536586 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3409] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3436] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3455] dhcp: init: Using DHCP client 'internal'
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3458] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3470] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3480] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3487] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3495] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3499] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3521] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3528] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3532] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3535] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3538] device (eth0): carrier: link connected
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3541] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3546] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 07:08:24 np0005536586 systemd[1]: Started Network Manager.
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3558] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3562] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3562] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3566] manager: NetworkManager state is now CONNECTING
Nov 26 07:08:24 np0005536586 systemd[1]: Reached target Network.
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3569] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3575] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3580] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3585] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 07:08:24 np0005536586 systemd[1]: Starting Network Manager Wait Online...
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3611] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 07:08:24 np0005536586 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3619] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 07:08:24 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3692] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3697] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 07:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3706] device (lo): Activation: successful, device activated.
Nov 26 07:08:24 np0005536586 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 26 07:08:24 np0005536586 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 07:08:24 np0005536586 systemd[1]: Reached target NFS client services.
Nov 26 07:08:24 np0005536586 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 07:08:24 np0005536586 systemd[1]: Reached target Remote File Systems.
Nov 26 07:08:24 np0005536586 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 07:08:26 np0005536586 NetworkManager[812]: <info>  [1764158906.0745] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:08:27 np0005536586 NetworkManager[812]: <info>  [1764158907.1208] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9552] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9585] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9587] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9591] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9595] device (eth0): Activation: successful, device activated.
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9600] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 07:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9603] manager: startup complete
Nov 26 07:08:28 np0005536586 systemd[1]: Finished Network Manager Wait Online.
Nov 26 07:08:28 np0005536586 systemd[1]: Starting Cloud-init: Network Stage...
Nov 26 07:08:29 np0005536586 cloud-init[878]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 26 Nov 2025 12:08:29 +0000. Up 10.85 seconds.
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True |        192.168.26.109        | 255.255.255.0 | global | fa:16:3e:a4:16:5c |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True |       2001:db8::f0/128       |       .       | global | fa:16:3e:a4:16:5c |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True | fe80::f816:3eff:fea4:165c/64 |       .       |  link  | fa:16:3e:a4:16:5c |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: | Route | Destination  |   Gateway   | Interface | Flags |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   1   | 2001:db8::1  |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   2   | 2001:db8::f0 |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   3   |  fe80::/64   |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   4   |     ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   6   |    local     |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   7   |    local     |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: |   8   |  multicast   |      ::     |    eth0   |   U   |
Nov 26 07:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 07:08:29 np0005536586 chronyd[784]: Selected source 50.117.3.95 (2.centos.pool.ntp.org)
Nov 26 07:08:29 np0005536586 chronyd[784]: System clock TAI offset set to 37 seconds
Nov 26 07:08:30 np0005536586 cloud-init[878]: Generating public/private rsa key pair.
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: SHA256:oU4bkBjGlPS30WixaiIId9EVSm/KlqgkMBmkj/8DaVY root@np0005536586
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: +---[RSA 3072]----+
Nov 26 07:08:30 np0005536586 cloud-init[878]: |o==...o.o.       |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |.+o+ +.B         |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |* o = B =        |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |+= .EB B .       |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |= +o+ X S        |
Nov 26 07:08:30 np0005536586 cloud-init[878]: | ==+ + o         |
Nov 26 07:08:30 np0005536586 cloud-init[878]: | oo.  o          |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |   ..            |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |    ..           |
Nov 26 07:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 07:08:30 np0005536586 cloud-init[878]: Generating public/private ecdsa key pair.
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: SHA256:d+JWs5Z695iDCleHiLz+sFmjVrvm69RLs8wy0o4DSSI root@np0005536586
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: +---[ECDSA 256]---+
Nov 26 07:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |  E . .. . . .   |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |   . o .S + * .  |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |      o  +.* =   |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |       .+oB.B.   |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |       .+%*Bo+oo |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |       .*OXB=.oo.|
Nov 26 07:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 07:08:30 np0005536586 cloud-init[878]: Generating public/private ed25519 key pair.
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 26 07:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: SHA256:Q2i8rhY/KJ/97GBHBvbjpafCAaO8LMKHbCxO+TEFq04 root@np0005536586
Nov 26 07:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 07:08:30 np0005536586 cloud-init[878]: +--[ED25519 256]--+
Nov 26 07:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |     . .         |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |   .  * .        |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |    =o =         |
Nov 26 07:08:30 np0005536586 cloud-init[878]: | . o +. S .      |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |  = o..+ =       |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |+E.+ =+.+ .      |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |B*=.===+ o       |
Nov 26 07:08:30 np0005536586 cloud-init[878]: |=+.=+ .==        |
Nov 26 07:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 07:08:30 np0005536586 systemd[1]: Finished Cloud-init: Network Stage.
Nov 26 07:08:30 np0005536586 systemd[1]: Reached target Cloud-config availability.
Nov 26 07:08:30 np0005536586 systemd[1]: Reached target Network is Online.
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Cloud-init: Config Stage...
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Crash recovery kernel arming...
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Notify NFS peers of a restart...
Nov 26 07:08:30 np0005536586 systemd[1]: Starting System Logging Service...
Nov 26 07:08:30 np0005536586 sm-notify[961]: Version 2.5.4 starting
Nov 26 07:08:30 np0005536586 systemd[1]: Starting OpenSSH server daemon...
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Permit User Sessions...
Nov 26 07:08:30 np0005536586 systemd[1]: Started Notify NFS peers of a restart.
Nov 26 07:08:30 np0005536586 systemd[1]: Started OpenSSH server daemon.
Nov 26 07:08:30 np0005536586 systemd[1]: Finished Permit User Sessions.
Nov 26 07:08:30 np0005536586 systemd[1]: Started Command Scheduler.
Nov 26 07:08:30 np0005536586 systemd[1]: Started Getty on tty1.
Nov 26 07:08:30 np0005536586 systemd[1]: Started Serial Getty on ttyS0.
Nov 26 07:08:30 np0005536586 systemd[1]: Reached target Login Prompts.
Nov 26 07:08:30 np0005536586 rsyslogd[962]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="962" x-info="https://www.rsyslog.com"] start
Nov 26 07:08:30 np0005536586 rsyslogd[962]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 26 07:08:30 np0005536586 systemd[1]: Started System Logging Service.
Nov 26 07:08:30 np0005536586 systemd[1]: Reached target Multi-User System.
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 26 07:08:30 np0005536586 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 26 07:08:30 np0005536586 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 26 07:08:30 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:08:30 np0005536586 kdumpctl[974]: kdump: No kdump initial ramdisk found.
Nov 26 07:08:30 np0005536586 kdumpctl[974]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 26 07:08:30 np0005536586 chronyd[784]: Selected source 204.9.54.119 (2.centos.pool.ntp.org)
Nov 26 07:08:30 np0005536586 cloud-init[1084]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 26 Nov 2025 12:08:30 +0000. Up 12.31 seconds.
Nov 26 07:08:30 np0005536586 systemd[1]: Finished Cloud-init: Config Stage.
Nov 26 07:08:30 np0005536586 systemd[1]: Starting Cloud-init: Final Stage...
Nov 26 07:08:30 np0005536586 dracut[1222]: dracut-057-102.git20250818.el9
Nov 26 07:08:31 np0005536586 cloud-init[1240]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 26 Nov 2025 12:08:31 +0000. Up 12.68 seconds.
Nov 26 07:08:31 np0005536586 dracut[1224]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 26 07:08:31 np0005536586 cloud-init[1256]: #############################################################
Nov 26 07:08:31 np0005536586 cloud-init[1259]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 26 07:08:31 np0005536586 cloud-init[1265]: 256 SHA256:d+JWs5Z695iDCleHiLz+sFmjVrvm69RLs8wy0o4DSSI root@np0005536586 (ECDSA)
Nov 26 07:08:31 np0005536586 cloud-init[1269]: 256 SHA256:Q2i8rhY/KJ/97GBHBvbjpafCAaO8LMKHbCxO+TEFq04 root@np0005536586 (ED25519)
Nov 26 07:08:31 np0005536586 cloud-init[1276]: 3072 SHA256:oU4bkBjGlPS30WixaiIId9EVSm/KlqgkMBmkj/8DaVY root@np0005536586 (RSA)
Nov 26 07:08:31 np0005536586 cloud-init[1277]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 26 07:08:31 np0005536586 cloud-init[1278]: #############################################################
Nov 26 07:08:31 np0005536586 cloud-init[1240]: Cloud-init v. 24.4-7.el9 finished at Wed, 26 Nov 2025 12:08:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.83 seconds
Nov 26 07:08:31 np0005536586 systemd[1]: Finished Cloud-init: Final Stage.
Nov 26 07:08:31 np0005536586 systemd[1]: Reached target Cloud-init target.
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 07:08:31 np0005536586 dracut[1224]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: memstrack is not available
Nov 26 07:08:32 np0005536586 dracut[1224]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 07:08:32 np0005536586 dracut[1224]: memstrack is not available
Nov 26 07:08:32 np0005536586 dracut[1224]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 07:08:32 np0005536586 dracut[1224]: *** Including module: systemd ***
Nov 26 07:08:32 np0005536586 dracut[1224]: *** Including module: fips ***
Nov 26 07:08:32 np0005536586 dracut[1224]: *** Including module: systemd-initrd ***
Nov 26 07:08:32 np0005536586 dracut[1224]: *** Including module: i18n ***
Nov 26 07:08:33 np0005536586 dracut[1224]: *** Including module: drm ***
Nov 26 07:08:33 np0005536586 irqbalance[772]: Cannot change IRQ 48 affinity: Operation not permitted
Nov 26 07:08:33 np0005536586 irqbalance[772]: IRQ 48 affinity is now unmanaged
Nov 26 07:08:33 np0005536586 irqbalance[772]: Cannot change IRQ 46 affinity: Operation not permitted
Nov 26 07:08:33 np0005536586 irqbalance[772]: IRQ 46 affinity is now unmanaged
Nov 26 07:08:33 np0005536586 dracut[1224]: *** Including module: prefixdevname ***
Nov 26 07:08:33 np0005536586 dracut[1224]: *** Including module: kernel-modules ***
Nov 26 07:08:33 np0005536586 kernel: block vda: the capability attribute has been deprecated.
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: kernel-modules-extra ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: qemu ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: fstab-sys ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: rootfs-block ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: terminfo ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: udev-rules ***
Nov 26 07:08:34 np0005536586 dracut[1224]: Skipping udev rule: 91-permissions.rules
Nov 26 07:08:34 np0005536586 dracut[1224]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: virtiofs ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: dracut-systemd ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: usrmount ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: base ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: fs-lib ***
Nov 26 07:08:34 np0005536586 dracut[1224]: *** Including module: kdumpbase ***
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 26 07:08:35 np0005536586 dracut[1224]:  microcode_ctl module: mangling fw_dir
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 26 07:08:35 np0005536586 dracut[1224]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Including module: openssl ***
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Including module: shutdown ***
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Including module: squash ***
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Including modules done ***
Nov 26 07:08:35 np0005536586 dracut[1224]: *** Installing kernel module dependencies ***
Nov 26 07:08:36 np0005536586 dracut[1224]: *** Installing kernel module dependencies done ***
Nov 26 07:08:36 np0005536586 dracut[1224]: *** Resolving executable dependencies ***
Nov 26 07:08:37 np0005536586 dracut[1224]: *** Resolving executable dependencies done ***
Nov 26 07:08:37 np0005536586 dracut[1224]: *** Generating early-microcode cpio image ***
Nov 26 07:08:37 np0005536586 dracut[1224]: *** Store current command line parameters ***
Nov 26 07:08:37 np0005536586 dracut[1224]: Stored kernel commandline:
Nov 26 07:08:37 np0005536586 dracut[1224]: No dracut internal kernel commandline stored in the initramfs
Nov 26 07:08:37 np0005536586 dracut[1224]: *** Install squash loader ***
Nov 26 07:08:38 np0005536586 dracut[1224]: *** Squashing the files inside the initramfs ***
Nov 26 07:08:39 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:08:39 np0005536586 dracut[1224]: *** Squashing the files inside the initramfs done ***
Nov 26 07:08:39 np0005536586 dracut[1224]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 26 07:08:39 np0005536586 dracut[1224]: *** Hardlinking files ***
Nov 26 07:08:39 np0005536586 dracut[1224]: *** Hardlinking files done ***
Nov 26 07:08:39 np0005536586 dracut[1224]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 26 07:08:40 np0005536586 kdumpctl[974]: kdump: kexec: loaded kdump kernel
Nov 26 07:08:40 np0005536586 kdumpctl[974]: kdump: Starting kdump: [OK]
Nov 26 07:08:40 np0005536586 systemd[1]: Finished Crash recovery kernel arming.
Nov 26 07:08:40 np0005536586 systemd[1]: Startup finished in 1.378s (kernel) + 2.089s (initrd) + 18.345s (userspace) = 21.813s.
Nov 26 07:08:54 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 07:08:56 np0005536586 systemd[1]: Created slice User Slice of UID 1000.
Nov 26 07:08:56 np0005536586 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 26 07:08:56 np0005536586 systemd-logind[777]: New session 1 of user zuul.
Nov 26 07:08:56 np0005536586 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 26 07:08:56 np0005536586 systemd[1]: Starting User Manager for UID 1000...
Nov 26 07:08:56 np0005536586 systemd[4373]: Queued start job for default target Main User Target.
Nov 26 07:08:56 np0005536586 systemd[4373]: Created slice User Application Slice.
Nov 26 07:08:56 np0005536586 systemd[4373]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 07:08:56 np0005536586 systemd[4373]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 07:08:56 np0005536586 systemd[4373]: Reached target Paths.
Nov 26 07:08:56 np0005536586 systemd[4373]: Reached target Timers.
Nov 26 07:08:56 np0005536586 systemd[4373]: Starting D-Bus User Message Bus Socket...
Nov 26 07:08:56 np0005536586 systemd[4373]: Starting Create User's Volatile Files and Directories...
Nov 26 07:08:56 np0005536586 systemd[4373]: Listening on D-Bus User Message Bus Socket.
Nov 26 07:08:56 np0005536586 systemd[4373]: Finished Create User's Volatile Files and Directories.
Nov 26 07:08:56 np0005536586 systemd[4373]: Reached target Sockets.
Nov 26 07:08:56 np0005536586 systemd[4373]: Reached target Basic System.
Nov 26 07:08:56 np0005536586 systemd[1]: Started User Manager for UID 1000.
Nov 26 07:08:56 np0005536586 systemd[4373]: Reached target Main User Target.
Nov 26 07:08:56 np0005536586 systemd[4373]: Startup finished in 82ms.
Nov 26 07:08:56 np0005536586 systemd[1]: Started Session 1 of User zuul.
Nov 26 07:08:56 np0005536586 python3[4455]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:08:58 np0005536586 python3[4483]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:09:03 np0005536586 python3[4537]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:09:04 np0005536586 python3[4577]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 26 07:09:05 np0005536586 python3[4603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/RTVn/2oEGi9ibprRz+YqtX7woECYN+wj7oUywpfYq5KGdLJihwNbBYV9L1X4mAkzE+3kz9cRFBvPOBGmGx4SJoagnPPHf7ezYYCOJ4rvqZj/pPU7S/e1VN3+BJvq7NLAWumkwT5WTT+OxWyTg9hLyt1Pdexi3qsS+MdDiveQ6at0kCI3ictJsXIAnY2la8fjhIEtwXczzm22FLjclKsYMa/PBO+YRMjptc9xCtzoLIGJJk1nZ9JC8PPla0AAMSdqdPPqP68Dyaqr79tb43rKyMN1M+Oo6sNNCg409ijwukDoiKqy8S8gxdPMZV483hzkaX7oAWL3A8bQsaxSLMag/XL375u6KQjfVeNrPTT28v7UsWS2+2+gWg7NWlJuyUBXH0Tn/kjBqzmmUJ934MjXKMsEWjjB5yeJYfRL8OwluBoJswqMCsg2HwWbzakrFZsdgL0kcbGYcZLm0hhwGz3xhqfoRFhUcW1LSOM3DacF3uYbLSOzHb4AkpLXlVJ5nNs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:05 np0005536586 python3[4627]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:06 np0005536586 python3[4726]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:06 np0005536586 python3[4797]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764158946.1064787-207-99758545323696/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a8484921f15249798e152441754b1550_id_rsa follow=False checksum=a867e1b24f47bae0626df00812743af234cdb57e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:06 np0005536586 python3[4920]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:07 np0005536586 python3[4991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764158946.744187-240-119413442670033/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a8484921f15249798e152441754b1550_id_rsa.pub follow=False checksum=a73671b2ca98633b83b685ceafe390a2024552ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:08 np0005536586 python3[5039]: ansible-ping Invoked with data=pong
Nov 26 07:09:08 np0005536586 python3[5063]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:09:10 np0005536586 python3[5117]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 26 07:09:11 np0005536586 python3[5149]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:11 np0005536586 python3[5173]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:11 np0005536586 python3[5197]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:11 np0005536586 python3[5221]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:11 np0005536586 python3[5245]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:12 np0005536586 python3[5269]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:13 np0005536586 irqbalance[772]: Cannot change IRQ 47 affinity: Operation not permitted
Nov 26 07:09:13 np0005536586 irqbalance[772]: IRQ 47 affinity is now unmanaged
Nov 26 07:09:13 np0005536586 python3[5295]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:13 np0005536586 python3[5373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:14 np0005536586 python3[5446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764158953.537438-21-100867771414356/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:14 np0005536586 python3[5494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:14 np0005536586 python3[5518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:14 np0005536586 python3[5542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:15 np0005536586 python3[5566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:15 np0005536586 python3[5590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:15 np0005536586 python3[5614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:15 np0005536586 python3[5638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:16 np0005536586 python3[5662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:16 np0005536586 python3[5686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:16 np0005536586 python3[5710]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:16 np0005536586 python3[5734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:16 np0005536586 python3[5758]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5830]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5854]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5878]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:17 np0005536586 python3[5902]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:18 np0005536586 python3[5926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:18 np0005536586 python3[5950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:18 np0005536586 python3[5974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:18 np0005536586 python3[5998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:18 np0005536586 python3[6022]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:19 np0005536586 python3[6046]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:19 np0005536586 python3[6070]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:19 np0005536586 python3[6094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:09:22 np0005536586 python3[6120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 07:09:22 np0005536586 systemd[1]: Starting Time & Date Service...
Nov 26 07:09:22 np0005536586 systemd[1]: Started Time & Date Service.
Nov 26 07:09:22 np0005536586 systemd-timedated[6122]: Changed time zone to 'UTC' (UTC).
Nov 26 07:09:22 np0005536586 python3[6151]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:23 np0005536586 python3[6227]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:23 np0005536586 python3[6298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764158962.9836133-153-23651387187932/source _original_basename=tmp39qqcbuz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:23 np0005536586 python3[6398]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:23 np0005536586 python3[6469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764158963.5828688-183-261698095444332/source _original_basename=tmptacta_pp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:24 np0005536586 python3[6571]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:24 np0005536586 python3[6644]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764158964.3616507-231-277102431289385/source _original_basename=tmp8hlujy0p follow=False checksum=43d6bf474fe3176ca4d99e899bb0d692cb0324b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:25 np0005536586 python3[6692]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:09:25 np0005536586 python3[6718]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:09:25 np0005536586 python3[6798]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:09:26 np0005536586 python3[6871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764158965.6637838-273-276585713401862/source _original_basename=tmpqspoaqlu follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:26 np0005536586 python3[6922]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-e995-ab1c-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:09:27 np0005536586 python3[6950]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e08-49e2-e995-ab1c-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 26 07:09:28 np0005536586 python3[6978]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:44 np0005536586 python3[7004]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:09:52 np0005536586 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 26 07:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 26 07:10:07 np0005536586 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5047] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 07:10:07 np0005536586 systemd-udevd[7007]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5303] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5320] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5322] device (eth1): carrier: link connected
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5324] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5328] policy: auto-activating connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5331] device (eth1): Activation: starting connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5331] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5333] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5336] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5339] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:07 np0005536586 python3[7034]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-5cb0-f397-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:10:17 np0005536586 python3[7114]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:10:17 np0005536586 python3[7187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764159017.4439554-111-114963850421969/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cafd78e264adfbd2a32b952d1e03afef2f90c19f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:10:18 np0005536586 python3[7237]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:10:18 np0005536586 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 07:10:18 np0005536586 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 07:10:18 np0005536586 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5270] caught SIGTERM, shutting down normally.
Nov 26 07:10:18 np0005536586 systemd[1]: Stopping Network Manager...
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): canceled DHCP transaction
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): state changed no lease
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): canceled DHCP transaction
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): state changed no lease
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5278] manager: NetworkManager state is now CONNECTING
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5427] dhcp4 (eth1): canceled DHCP transaction
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5428] dhcp4 (eth1): state changed no lease
Nov 26 07:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5445] exiting (success)
Nov 26 07:10:18 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:10:18 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:10:18 np0005536586 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 07:10:18 np0005536586 systemd[1]: Stopped Network Manager.
Nov 26 07:10:18 np0005536586 systemd[1]: Starting Network Manager...
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5919] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5920] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5958] manager[0x55d338226090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 07:10:18 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 07:10:18 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6541] hostname: hostname: using hostnamed
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6542] hostname: static hostname changed from (none) to "np0005536586"
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6545] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6548] manager[0x55d338226090]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6549] manager[0x55d338226090]: rfkill: WWAN hardware radio set enabled
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6567] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6568] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6569] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6570] manager: Networking is enabled by state file
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6573] settings: Loaded settings plugin: keyfile (internal)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6576] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6593] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6599] dhcp: init: Using DHCP client 'internal'
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6601] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6604] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6608] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6613] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6618] device (eth0): carrier: link connected
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6622] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6625] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6626] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6630] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6641] device (eth1): carrier: link connected
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6644] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6649] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff) (indicated)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6650] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6653] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6658] device (eth1): Activation: starting connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 07:10:18 np0005536586 systemd[1]: Started Network Manager.
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6662] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6666] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6668] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6669] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6670] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6672] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6674] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6676] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6677] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6683] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6685] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6688] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6690] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6699] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6703] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6714] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6716] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6721] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 07:10:18 np0005536586 systemd[1]: Starting Network Manager Wait Online...
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6746] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 07:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6751] device (lo): Activation: successful, device activated.
Nov 26 07:10:18 np0005536586 python3[7309]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-5cb0-f397-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7568] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7576] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7605] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7606] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7608] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7610] device (eth0): Activation: successful, device activated.
Nov 26 07:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7614] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 07:10:29 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:10:48 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 07:10:58 np0005536586 python3[7410]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:10:58 np0005536586 python3[7483]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159058.4727163-273-80592289281797/source _original_basename=tmpxbznffdp follow=False checksum=421723b73c71618e6142a2656fd71173f072c227 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 07:11:04 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:11:04 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4254] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4256] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4261] device (eth1): Activation: successful, device activated.
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4265] manager: startup complete
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4266] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <warn>  [1764159064.4270] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4275] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 systemd[1]: Finished Network Manager Wait Online.
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): canceled DHCP transaction
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): state changed no lease
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4347] policy: auto-activating connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4350] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4351] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4352] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4356] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4363] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4381] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4382] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4386] device (eth1): Activation: successful, device activated.
Nov 26 07:11:04 np0005536586 systemd[4373]: Starting Mark boot as successful...
Nov 26 07:11:04 np0005536586 systemd[4373]: Finished Mark boot as successful.
Nov 26 07:11:14 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:11:59 np0005536586 systemd-logind[777]: Session 1 logged out. Waiting for processes to exit.
Nov 26 07:14:04 np0005536586 systemd[4373]: Created slice User Background Tasks Slice.
Nov 26 07:14:04 np0005536586 systemd[4373]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 07:14:04 np0005536586 systemd[4373]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 07:15:02 np0005536586 systemd-logind[777]: New session 3 of user zuul.
Nov 26 07:15:02 np0005536586 systemd[1]: Started Session 3 of User zuul.
Nov 26 07:15:02 np0005536586 python3[7565]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e08-49e2-c138-c2a2-000000001cc2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:02 np0005536586 python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:02 np0005536586 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:03 np0005536586 python3[7646]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:03 np0005536586 python3[7672]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:03 np0005536586 python3[7698]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:04 np0005536586 python3[7776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:15:04 np0005536586 python3[7849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159304.0124166-464-228299459114709/source _original_basename=tmpm9ojmfgr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:15:05 np0005536586 python3[7899]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:15:05 np0005536586 systemd[1]: Reloading.
Nov 26 07:15:05 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:15:06 np0005536586 python3[7955]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 26 07:15:06 np0005536586 python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:06 np0005536586 python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:07 np0005536586 python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:07 np0005536586 python3[8065]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:07 np0005536586 python3[8092]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e08-49e2-c138-c2a2-000000001cc9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:15:08 np0005536586 python3[8122]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:15:10 np0005536586 systemd[1]: session-3.scope: Deactivated successfully.
Nov 26 07:15:10 np0005536586 systemd[1]: session-3.scope: Consumed 2.914s CPU time.
Nov 26 07:15:10 np0005536586 systemd-logind[777]: Session 3 logged out. Waiting for processes to exit.
Nov 26 07:15:10 np0005536586 systemd-logind[777]: Removed session 3.
Nov 26 07:15:12 np0005536586 systemd-logind[777]: New session 4 of user zuul.
Nov 26 07:15:12 np0005536586 systemd[1]: Started Session 4 of User zuul.
Nov 26 07:15:12 np0005536586 python3[8157]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 07:15:30 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:15:30 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:15:37 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:15:43 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:15:44 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:15:44 np0005536586 setsebool[8226]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 26 07:15:44 np0005536586 setsebool[8226]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 26 07:15:53 np0005536586 kernel: SELinux:  Converting 389 SID table entries...
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:15:53 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:16:05 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 07:16:05 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:16:05 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:16:05 np0005536586 systemd[1]: Reloading.
Nov 26 07:16:05 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:16:05 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:16:08 np0005536586 python3[13727]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e08-49e2-c994-0e10-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:16:09 np0005536586 kernel: evm: overlay not supported
Nov 26 07:16:09 np0005536586 systemd[4373]: Starting D-Bus User Message Bus...
Nov 26 07:16:09 np0005536586 dbus-broker-launch[14046]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 26 07:16:09 np0005536586 dbus-broker-launch[14046]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 26 07:16:09 np0005536586 systemd[4373]: Started D-Bus User Message Bus.
Nov 26 07:16:09 np0005536586 dbus-broker-lau[14046]: Ready
Nov 26 07:16:09 np0005536586 systemd[4373]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 07:16:09 np0005536586 systemd[4373]: Created slice Slice /user.
Nov 26 07:16:09 np0005536586 systemd[4373]: podman-14027.scope: unit configures an IP firewall, but not running as root.
Nov 26 07:16:09 np0005536586 systemd[4373]: (This warning is only shown for the first unit using IP firewalling.)
Nov 26 07:16:09 np0005536586 systemd[4373]: Started podman-14027.scope.
Nov 26 07:16:09 np0005536586 systemd[4373]: Started podman-pause-b862250b.scope.
Nov 26 07:16:10 np0005536586 python3[14847]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.98:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.98:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:16:10 np0005536586 python3[14847]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 26 07:16:10 np0005536586 systemd[1]: session-4.scope: Deactivated successfully.
Nov 26 07:16:10 np0005536586 systemd[1]: session-4.scope: Consumed 44.117s CPU time.
Nov 26 07:16:10 np0005536586 systemd-logind[777]: Session 4 logged out. Waiting for processes to exit.
Nov 26 07:16:10 np0005536586 systemd-logind[777]: Removed session 4.
Nov 26 07:16:31 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:16:31 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:16:31 np0005536586 systemd[1]: man-db-cache-update.service: Consumed 31.653s CPU time.
Nov 26 07:16:31 np0005536586 systemd[1]: run-r20419a130aa9457785c77a38cdb18796.service: Deactivated successfully.
Nov 26 07:16:38 np0005536586 systemd-logind[777]: New session 5 of user zuul.
Nov 26 07:16:38 np0005536586 systemd[1]: Started Session 5 of User zuul.
Nov 26 07:16:38 np0005536586 python3[29726]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:16:39 np0005536586 python3[29752]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:16:39 np0005536586 python3[29778]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005536586 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 26 07:16:40 np0005536586 python3[29812]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 07:16:40 np0005536586 python3[29890]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:16:40 np0005536586 python3[29963]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159400.2022147-137-66963313755005/source _original_basename=tmpsqwbgdw3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:16:41 np0005536586 python3[30013]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 26 07:16:41 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 07:16:41 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 07:16:41 np0005536586 systemd-hostnamed[30017]: Changed pretty hostname to 'compute-0'
Nov 26 07:16:41 np0005536586 systemd-hostnamed[30017]: Hostname set to <compute-0> (static)
Nov 26 07:16:41 np0005536586 NetworkManager[7252]: <info>  [1764159401.4347] hostname: static hostname changed from "np0005536586" to "compute-0"
Nov 26 07:16:41 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:16:41 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:16:41 np0005536586 systemd[1]: session-5.scope: Deactivated successfully.
Nov 26 07:16:41 np0005536586 systemd[1]: session-5.scope: Consumed 1.676s CPU time.
Nov 26 07:16:41 np0005536586 systemd-logind[777]: Session 5 logged out. Waiting for processes to exit.
Nov 26 07:16:41 np0005536586 systemd-logind[777]: Removed session 5.
Nov 26 07:16:51 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:17:11 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 07:20:26 np0005536586 systemd-logind[777]: New session 6 of user zuul.
Nov 26 07:20:26 np0005536586 systemd[1]: Started Session 6 of User zuul.
Nov 26 07:20:26 np0005536586 python3[30110]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:20:28 np0005536586 python3[30222]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:28 np0005536586 python3[30295]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:28 np0005536586 python3[30321]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:28 np0005536586 python3[30394]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:28 np0005536586 python3[30420]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:29 np0005536586 python3[30493]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:29 np0005536586 python3[30519]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:29 np0005536586 python3[30592]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:29 np0005536586 python3[30618]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:30 np0005536586 python3[30691]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:30 np0005536586 python3[30717]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:30 np0005536586 python3[30790]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:30 np0005536586 python3[30816]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:20:30 np0005536586 python3[30889]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:20:42 np0005536586 python3[30947]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:23:04 np0005536586 systemd[1]: Starting dnf makecache...
Nov 26 07:23:04 np0005536586 dnf[30949]: Failed determining last makecache time.
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-openstack-barbican-42b4c41831408a8e323  83 kB/s |  13 kB     00:00
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 456 kB/s |  65 kB     00:00
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-openstack-cinder-1c00d6490d88e436f26ef 227 kB/s |  32 kB     00:00
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-python-stevedore-c4acc5639fd2329372142 957 kB/s | 131 kB     00:00
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-python-observabilityclient-2f31846d73c 174 kB/s |  25 kB     00:00
Nov 26 07:23:05 np0005536586 dnf[30949]: delorean-os-net-config-bbae2ed8a159b0435a473f38 2.5 MB/s | 356 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 312 kB/s |  42 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-python-designate-tests-tempest-347fdbc  98 kB/s |  18 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-openstack-glance-1fd12c29b339f30fe823e 123 kB/s |  18 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 179 kB/s |  29 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-openstack-manila-3c01b7181572c95dac462 176 kB/s |  25 kB     00:00
Nov 26 07:23:06 np0005536586 dnf[30949]: delorean-python-whitebox-neutron-tests-tempest- 1.0 MB/s | 154 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-openstack-octavia-ba397f07a7331190208c 169 kB/s |  26 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-openstack-watcher-c014f81a8647287f6dcc 122 kB/s |  16 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-python-tcib-1124124ec06aadbac34f0d340b  53 kB/s | 7.4 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 991 kB/s | 144 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-openstack-swift-dc98a8463506ac520c469a 103 kB/s |  14 kB     00:00
Nov 26 07:23:07 np0005536586 dnf[30949]: delorean-python-tempestconf-8515371b7cceebd4282 397 kB/s |  53 kB     00:00
Nov 26 07:23:08 np0005536586 dnf[30949]: delorean-openstack-heat-ui-013accbfd179753bc3f0 711 kB/s |  96 kB     00:00
Nov 26 07:23:09 np0005536586 dnf[30949]: CentOS Stream 9 - BaseOS                        5.0 kB/s | 7.3 kB     00:01
Nov 26 07:23:10 np0005536586 dnf[30949]: CentOS Stream 9 - AppStream                      15 kB/s | 7.4 kB     00:00
Nov 26 07:23:10 np0005536586 dnf[30949]: CentOS Stream 9 - CRB                           8.7 kB/s | 7.2 kB     00:00
Nov 26 07:23:11 np0005536586 dnf[30949]: CentOS Stream 9 - Extras packages                19 kB/s | 8.3 kB     00:00
Nov 26 07:23:11 np0005536586 dnf[30949]: dlrn-antelope-testing                           7.3 MB/s | 1.1 MB     00:00
Nov 26 07:23:11 np0005536586 dnf[30949]: dlrn-antelope-build-deps                        3.2 MB/s | 461 kB     00:00
Nov 26 07:23:12 np0005536586 dnf[30949]: centos9-rabbitmq                                2.6 MB/s | 123 kB     00:00
Nov 26 07:23:12 np0005536586 dnf[30949]: centos9-storage                                  32 MB/s | 415 kB     00:00
Nov 26 07:23:12 np0005536586 dnf[30949]: centos9-opstools                                4.4 MB/s |  51 kB     00:00
Nov 26 07:23:12 np0005536586 dnf[30949]: NFV SIG OpenvSwitch                              34 MB/s | 458 kB     00:00
Nov 26 07:23:12 np0005536586 dnf[30949]: repo-setup-centos-appstream                     219 MB/s |  25 MB     00:00
Nov 26 07:23:17 np0005536586 dnf[30949]: repo-setup-centos-baseos                        199 MB/s | 8.8 MB     00:00
Nov 26 07:23:18 np0005536586 dnf[30949]: repo-setup-centos-highavailability               46 MB/s | 744 kB     00:00
Nov 26 07:23:18 np0005536586 dnf[30949]: repo-setup-centos-powertools                    199 MB/s | 7.3 MB     00:00
Nov 26 07:23:18 np0005536586 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 26 07:23:18 np0005536586 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 26 07:23:18 np0005536586 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 26 07:23:18 np0005536586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 26 07:23:22 np0005536586 dnf[30949]: Extra Packages for Enterprise Linux 9 - x86_64  7.2 MB/s |  20 MB     00:02
Nov 26 07:23:32 np0005536586 dnf[30949]: Metadata cache created.
Nov 26 07:23:32 np0005536586 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 26 07:23:32 np0005536586 systemd[1]: Finished dnf makecache.
Nov 26 07:23:32 np0005536586 systemd[1]: dnf-makecache.service: Consumed 18.563s CPU time.
Nov 26 07:25:42 np0005536586 systemd[1]: session-6.scope: Deactivated successfully.
Nov 26 07:25:42 np0005536586 systemd[1]: session-6.scope: Consumed 3.293s CPU time.
Nov 26 07:25:42 np0005536586 systemd-logind[777]: Session 6 logged out. Waiting for processes to exit.
Nov 26 07:25:42 np0005536586 systemd-logind[777]: Removed session 6.
Nov 26 07:30:04 np0005536586 systemd-logind[777]: New session 7 of user zuul.
Nov 26 07:30:04 np0005536586 systemd[1]: Started Session 7 of User zuul.
Nov 26 07:30:04 np0005536586 python3.9[31208]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:30:05 np0005536586 python3.9[31389]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:30:14 np0005536586 systemd[1]: session-7.scope: Deactivated successfully.
Nov 26 07:30:14 np0005536586 systemd[1]: session-7.scope: Consumed 6.253s CPU time.
Nov 26 07:30:14 np0005536586 systemd-logind[777]: Session 7 logged out. Waiting for processes to exit.
Nov 26 07:30:14 np0005536586 systemd-logind[777]: Removed session 7.
Nov 26 07:30:30 np0005536586 systemd-logind[777]: New session 8 of user zuul.
Nov 26 07:30:30 np0005536586 systemd[1]: Started Session 8 of User zuul.
Nov 26 07:30:30 np0005536586 python3.9[31601]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 07:30:31 np0005536586 python3.9[31775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:30:32 np0005536586 python3.9[31927]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:30:33 np0005536586 python3.9[32080]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:30:33 np0005536586 python3.9[32232]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:30:34 np0005536586 python3.9[32384]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:30:34 np0005536586 python3.9[32507]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160233.7326355-73-138959552607537/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:30:35 np0005536586 python3.9[32659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:30:35 np0005536586 python3.9[32815]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:30:36 np0005536586 python3.9[32967]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:30:36 np0005536586 python3.9[33117]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:30:38 np0005536586 python3.9[33370]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:30:39 np0005536586 python3.9[33520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:30:40 np0005536586 python3.9[33674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:30:40 np0005536586 python3.9[33832]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:30:41 np0005536586 python3.9[33916]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:31:51 np0005536586 systemd[1]: Reloading.
Nov 26 07:31:51 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:31:51 np0005536586 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 26 07:31:51 np0005536586 systemd[1]: Reloading.
Nov 26 07:31:51 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:31:51 np0005536586 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 26 07:31:51 np0005536586 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 26 07:31:51 np0005536586 systemd[1]: Reloading.
Nov 26 07:31:51 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:31:51 np0005536586 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 26 07:31:51 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:31:51 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:31:51 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:32:35 np0005536586 kernel: SELinux:  Converting 2719 SID table entries...
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:32:35 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:32:35 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 26 07:32:35 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:32:35 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:32:35 np0005536586 systemd[1]: Reloading.
Nov 26 07:32:36 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:32:36 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:32:36 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:32:36 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:32:36 np0005536586 systemd[1]: run-raf41206f44d547c2a80928a5b4a86684.service: Deactivated successfully.
Nov 26 07:32:36 np0005536586 python3.9[35410]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:32:38 np0005536586 python3.9[35691]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 07:32:38 np0005536586 python3.9[35843]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 07:32:40 np0005536586 python3.9[35996]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:32:41 np0005536586 python3.9[36148]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 07:32:42 np0005536586 python3.9[36300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:32:42 np0005536586 python3.9[36452]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:32:42 np0005536586 python3.9[36575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160362.2464335-236-200188565198859/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:32:43 np0005536586 python3.9[36727]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:32:44 np0005536586 python3.9[36879]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:32:44 np0005536586 python3.9[37032]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:32:47 np0005536586 python3.9[37184]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 07:32:47 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:32:48 np0005536586 python3.9[37338]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:32:48 np0005536586 python3.9[37496]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 07:32:49 np0005536586 python3.9[37656]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 07:32:49 np0005536586 python3.9[37809]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:32:50 np0005536586 python3.9[37967]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 07:32:50 np0005536586 python3.9[38119]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:32:52 np0005536586 python3.9[38272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:32:52 np0005536586 python3.9[38424]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:32:53 np0005536586 python3.9[38547]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160372.4117715-355-227205241924536/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:32:53 np0005536586 python3.9[38699]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:32:53 np0005536586 systemd[1]: Starting Load Kernel Modules...
Nov 26 07:32:53 np0005536586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 26 07:32:53 np0005536586 systemd-modules-load[38703]: Inserted module 'br_netfilter'
Nov 26 07:32:53 np0005536586 kernel: Bridge firewalling registered
Nov 26 07:32:53 np0005536586 systemd[1]: Finished Load Kernel Modules.
Nov 26 07:32:54 np0005536586 python3.9[38859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:32:54 np0005536586 python3.9[38982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160374.0941312-378-139837407102522/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:32:55 np0005536586 python3.9[39134]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:33:00 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:33:00 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:33:01 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:33:01 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:33:01 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:01 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:01 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:33:02 np0005536586 python3.9[40487]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:33:02 np0005536586 python3.9[41576]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 07:33:03 np0005536586 python3.9[42424]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:33:03 np0005536586 python3.9[43129]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:03 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:33:03 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:33:03 np0005536586 systemd[1]: man-db-cache-update.service: Consumed 3.338s CPU time.
Nov 26 07:33:03 np0005536586 systemd[1]: run-r2f62388b33d54473987a4b284b643bc5.service: Deactivated successfully.
Nov 26 07:33:03 np0005536586 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 07:33:04 np0005536586 systemd[1]: Starting Authorization Manager...
Nov 26 07:33:04 np0005536586 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 07:33:04 np0005536586 polkitd[43512]: Started polkitd version 0.117
Nov 26 07:33:04 np0005536586 systemd[1]: Started Authorization Manager.
Nov 26 07:33:04 np0005536586 python3.9[43678]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:33:04 np0005536586 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 07:33:04 np0005536586 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 07:33:04 np0005536586 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 07:33:04 np0005536586 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 07:33:04 np0005536586 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 07:33:05 np0005536586 python3.9[43840]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 07:33:07 np0005536586 python3.9[43992]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:33:07 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:07 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:07 np0005536586 python3.9[44180]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:33:07 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:07 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:08 np0005536586 python3.9[44369]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:08 np0005536586 python3.9[44522]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:08 np0005536586 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 26 07:33:09 np0005536586 python3.9[44675]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:10 np0005536586 python3.9[44837]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:11 np0005536586 python3.9[44990]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:33:11 np0005536586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 07:33:11 np0005536586 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 07:33:11 np0005536586 systemd[1]: Stopping Apply Kernel Variables...
Nov 26 07:33:11 np0005536586 systemd[1]: Starting Apply Kernel Variables...
Nov 26 07:33:11 np0005536586 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 26 07:33:11 np0005536586 systemd[1]: Finished Apply Kernel Variables.
Nov 26 07:33:11 np0005536586 systemd[1]: session-8.scope: Deactivated successfully.
Nov 26 07:33:11 np0005536586 systemd[1]: session-8.scope: Consumed 1min 38.038s CPU time.
Nov 26 07:33:11 np0005536586 systemd-logind[777]: Session 8 logged out. Waiting for processes to exit.
Nov 26 07:33:11 np0005536586 systemd-logind[777]: Removed session 8.
Nov 26 07:33:16 np0005536586 systemd-logind[777]: New session 9 of user zuul.
Nov 26 07:33:16 np0005536586 systemd[1]: Started Session 9 of User zuul.
Nov 26 07:33:17 np0005536586 python3.9[45174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:33:18 np0005536586 python3.9[45330]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 07:33:18 np0005536586 python3.9[45483]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:33:19 np0005536586 python3.9[45641]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 07:33:19 np0005536586 python3.9[45801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:33:20 np0005536586 python3.9[45885]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 07:33:23 np0005536586 irqbalance[772]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 26 07:33:23 np0005536586 irqbalance[772]: IRQ 44 affinity is now unmanaged
Nov 26 07:33:26 np0005536586 python3.9[46052]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:33:35 np0005536586 kernel: SELinux:  Converting 2731 SID table entries...
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:33:35 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:33:35 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 26 07:33:35 np0005536586 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 26 07:33:36 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:33:36 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:33:36 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:36 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:33:36 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:36 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:33:36 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:33:36 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:33:36 np0005536586 systemd[1]: run-r7cb0bb0322dc4327bb4df237b950e2b6.service: Deactivated successfully.
Nov 26 07:33:37 np0005536586 python3.9[47149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:33:37 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:37 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:37 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:33:37 np0005536586 systemd[1]: Starting Open vSwitch Database Unit...
Nov 26 07:33:37 np0005536586 chown[47191]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 26 07:33:37 np0005536586 ovs-ctl[47196]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 26 07:33:37 np0005536586 ovs-ctl[47196]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 26 07:33:38 np0005536586 ovs-ctl[47196]: Starting ovsdb-server [  OK  ]
Nov 26 07:33:38 np0005536586 ovs-vsctl[47245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 26 07:33:38 np0005536586 ovs-vsctl[47265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1a132c77-5dda-4b90-923d-26a448f3fef6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 26 07:33:38 np0005536586 ovs-ctl[47196]: Configuring Open vSwitch system IDs [  OK  ]
Nov 26 07:33:38 np0005536586 ovs-ctl[47196]: Enabling remote OVSDB managers [  OK  ]
Nov 26 07:33:38 np0005536586 systemd[1]: Started Open vSwitch Database Unit.
Nov 26 07:33:38 np0005536586 ovs-vsctl[47271]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 07:33:38 np0005536586 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 26 07:33:38 np0005536586 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 26 07:33:38 np0005536586 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 26 07:33:38 np0005536586 kernel: openvswitch: Open vSwitch switching datapath
Nov 26 07:33:38 np0005536586 ovs-ctl[47316]: Inserting openvswitch module [  OK  ]
Nov 26 07:33:38 np0005536586 ovs-ctl[47285]: Starting ovs-vswitchd [  OK  ]
Nov 26 07:33:38 np0005536586 ovs-ctl[47285]: Enabling remote OVSDB managers [  OK  ]
Nov 26 07:33:38 np0005536586 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 26 07:33:38 np0005536586 ovs-vsctl[47334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 07:33:38 np0005536586 systemd[1]: Starting Open vSwitch...
Nov 26 07:33:38 np0005536586 systemd[1]: Finished Open vSwitch.
Nov 26 07:33:38 np0005536586 python3.9[47485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:33:39 np0005536586 python3.9[47637]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 07:33:40 np0005536586 kernel: SELinux:  Converting 2745 SID table entries...
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:33:40 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:33:41 np0005536586 python3.9[47792]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:33:41 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 26 07:33:41 np0005536586 python3.9[47950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:33:43 np0005536586 python3.9[48103]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:33:44 np0005536586 python3.9[48390]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 07:33:45 np0005536586 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:33:45 np0005536586 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:33:48 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:33:48 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:33:48 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:48 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:48 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:33:48 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:33:48 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:33:48 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:33:48 np0005536586 systemd[1]: run-r96993d5388e2445e9478218b347fba95.service: Deactivated successfully.
Nov 26 07:33:49 np0005536586 python3.9[49011]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:33:49 np0005536586 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 07:33:49 np0005536586 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 07:33:49 np0005536586 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 07:33:49 np0005536586 systemd[1]: Stopping Network Manager...
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3492] caught SIGTERM, shutting down normally.
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): canceled DHCP transaction
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): state changed no lease
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): canceled DHCP transaction
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): state changed no lease
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3502] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 07:33:49 np0005536586 NetworkManager[7252]: <info>  [1764160429.3533] exiting (success)
Nov 26 07:33:49 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:33:49 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:33:49 np0005536586 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 07:33:49 np0005536586 systemd[1]: Stopped Network Manager.
Nov 26 07:33:49 np0005536586 systemd[1]: Starting Network Manager...
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.3990] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.3991] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4029] manager[0x561e4534d010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 07:33:49 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 07:33:49 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4582] hostname: hostname: using hostnamed
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4583] hostname: static hostname changed from (none) to "compute-0"
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4585] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4588] manager[0x561e4534d010]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4588] manager[0x561e4534d010]: rfkill: WWAN hardware radio set enabled
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4601] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4608] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4608] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4608] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4609] manager: Networking is enabled by state file
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4610] settings: Loaded settings plugin: keyfile (internal)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4613] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4629] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4634] dhcp: init: Using DHCP client 'internal'
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4636] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4639] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4642] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4647] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4651] device (eth0): carrier: link connected
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4654] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4657] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4658] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4661] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4665] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4669] device (eth1): carrier: link connected
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4671] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4674] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b) (indicated)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4674] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4678] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4682] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4685] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 07:33:49 np0005536586 systemd[1]: Started Network Manager.
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4689] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4690] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4691] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4692] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4694] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4695] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4697] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4699] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4702] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4704] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4705] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4710] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4713] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4718] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4723] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4724] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4727] device (lo): Activation: successful, device activated.
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4731] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4735] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4756] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 systemd[1]: Starting Network Manager Wait Online...
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4763] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4766] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 26 07:33:49 np0005536586 NetworkManager[49024]: <info>  [1764160429.4770] device (eth1): Activation: successful, device activated.
Nov 26 07:33:49 np0005536586 python3.9[49220]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5628] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5636] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5670] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5671] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5673] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5675] device (eth0): Activation: successful, device activated.
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5678] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 07:33:50 np0005536586 NetworkManager[49024]: <info>  [1764160430.5680] manager: startup complete
Nov 26 07:33:50 np0005536586 systemd[1]: Finished Network Manager Wait Online.
Nov 26 07:33:55 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:33:55 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:33:55 np0005536586 systemd[1]: Reloading.
Nov 26 07:33:55 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:33:55 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:33:55 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:33:56 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:33:56 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:33:56 np0005536586 systemd[1]: run-r36894f7c132443c6ad0a134a7ff4402e.service: Deactivated successfully.
Nov 26 07:33:57 np0005536586 python3.9[49700]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:33:57 np0005536586 python3.9[49852]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:33:58 np0005536586 python3.9[50006]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:33:58 np0005536586 python3.9[50158]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:33:59 np0005536586 python3.9[50312]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:33:59 np0005536586 python3.9[50464]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:00 np0005536586 python3.9[50616]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:00 np0005536586 python3.9[50739]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160439.7785628-229-261859560211499/.source _original_basename=.j_7zdh79 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:00 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:34:01 np0005536586 python3.9[50891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:01 np0005536586 python3.9[51043]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 26 07:34:02 np0005536586 python3.9[51195]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:03 np0005536586 python3.9[51622]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 26 07:34:04 np0005536586 ansible-async_wrapper.py[51797]: Invoked with j737676950907 300 /home/zuul/.ansible/tmp/ansible-tmp-1764160443.8974338-295-265983123450112/AnsiballZ_edpm_os_net_config.py _
Nov 26 07:34:04 np0005536586 ansible-async_wrapper.py[51800]: Starting module and watcher
Nov 26 07:34:04 np0005536586 ansible-async_wrapper.py[51800]: Start watching 51801 (300)
Nov 26 07:34:04 np0005536586 ansible-async_wrapper.py[51801]: Start module (51801)
Nov 26 07:34:04 np0005536586 ansible-async_wrapper.py[51797]: Return async_wrapper task started.
Nov 26 07:34:04 np0005536586 python3.9[51802]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 26 07:34:05 np0005536586 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 26 07:34:05 np0005536586 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 26 07:34:05 np0005536586 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 26 07:34:05 np0005536586 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 26 07:34:05 np0005536586 kernel: cfg80211: failed to load regulatory.db
Nov 26 07:34:05 np0005536586 NetworkManager[49024]: <info>  [1764160445.9772] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 07:34:05 np0005536586 NetworkManager[49024]: <info>  [1764160445.9787] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0146] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0147] audit: op="connection-add" uuid="1243cce9-421d-4253-b9ed-b59bd081783d" name="br-ex-br" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0157] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0158] audit: op="connection-add" uuid="a0e24a11-f180-4069-8ae5-827540d8884f" name="br-ex-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0167] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0168] audit: op="connection-add" uuid="c2517814-d9f4-44f9-9043-95bf252a8f9d" name="eth1-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0176] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0177] audit: op="connection-add" uuid="0cfa67fb-288b-46a2-82e9-107bddfde4c7" name="vlan20-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0185] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0185] audit: op="connection-add" uuid="6d37eb1c-302e-4556-b2dc-6307938d5eaf" name="vlan21-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0193] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0194] audit: op="connection-add" uuid="6d78c477-4735-44da-8d44-e935b3b614ab" name="vlan22-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0202] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0203] audit: op="connection-add" uuid="e667de12-3d14-45b8-9a07-40279d5a0a48" name="vlan23-port" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0217] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.may-fail,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0230] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0231] audit: op="connection-add" uuid="cf0fd381-56be-47d9-948c-451180b92cf3" name="br-ex-if" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0247] audit: op="connection-update" uuid="7797382d-d835-51bb-84eb-feed5516994b" name="ci-private-network" args="ovs-interface.type,connection.slave-type,connection.timestamp,connection.master,connection.controller,connection.port-type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ovs-external-ids.data" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0259] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0260] audit: op="connection-add" uuid="329a5fc7-eb9b-4753-b5f4-22dc70e0e1e9" name="vlan20-if" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0271] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0272] audit: op="connection-add" uuid="26e682c0-91cc-4cf7-b67b-7d83e9fe2579" name="vlan21-if" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0282] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0283] audit: op="connection-add" uuid="2337e498-43ae-4e56-bf6f-ba7a085c31a7" name="vlan22-if" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0294] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0295] audit: op="connection-add" uuid="b221e218-d89b-4dde-9771-ab3fa0888d9f" name="vlan23-if" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0303] audit: op="connection-delete" uuid="09541ed1-27f0-3dab-920e-bf33aaba73ff" name="Wired connection 1" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0312] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0318] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0320] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1243cce9-421d-4253-b9ed-b59bd081783d)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0321] audit: op="connection-activate" uuid="1243cce9-421d-4253-b9ed-b59bd081783d" name="br-ex-br" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0322] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0326] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0328] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a0e24a11-f180-4069-8ae5-827540d8884f)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0329] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0333] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0335] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c2517814-d9f4-44f9-9043-95bf252a8f9d)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0336] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0340] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0342] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0cfa67fb-288b-46a2-82e9-107bddfde4c7)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0343] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0347] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0349] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6d37eb1c-302e-4556-b2dc-6307938d5eaf)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0350] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0354] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0356] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6d78c477-4735-44da-8d44-e935b3b614ab)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0357] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0361] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0363] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e667de12-3d14-45b8-9a07-40279d5a0a48)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0364] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0365] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0366] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0370] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0373] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0375] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cf0fd381-56be-47d9-948c-451180b92cf3)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0376] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0378] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0379] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0379] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0380] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0386] device (eth1): disconnecting for new activation request.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0386] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0388] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0389] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0390] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0391] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0393] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0395] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (329a5fc7-eb9b-4753-b5f4-22dc70e0e1e9)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0396] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0398] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0399] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0399] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0405] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0407] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0409] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (26e682c0-91cc-4cf7-b67b-7d83e9fe2579)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0410] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0411] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0412] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0413] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0414] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0417] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0419] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (2337e498-43ae-4e56-bf6f-ba7a085c31a7)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0420] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0421] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0422] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0423] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0424] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0427] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0429] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b221e218-d89b-4dde-9771-ab3fa0888d9f)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0430] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0431] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0432] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0433] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0434] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0442] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.may-fail,ipv6.routes,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0444] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0446] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0447] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0451] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0453] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 kernel: ovs-system: entered promiscuous mode
Nov 26 07:34:06 np0005536586 kernel: Timeout policy base is empty
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0455] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0458] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0459] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0461] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0464] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0465] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0466] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0469] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0471] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0473] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0474] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0477] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0480] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0481] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0482] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0485] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 systemd-udevd[51809]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): canceled DHCP transaction
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): state changed no lease
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): canceled DHCP transaction
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): state changed no lease
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0491] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 26 07:34:06 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0497] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0499] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51803 uid=0 result="fail" reason="Device is not activated"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0528] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0529] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0545] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0571] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0600] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0614] device (eth1): disconnecting for new activation request.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0615] audit: op="connection-activate" uuid="7797382d-d835-51bb-84eb-feed5516994b" name="ci-private-network" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0652] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0701] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0704] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0705] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0708] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0710] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0713] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0715] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0718] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0719] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0720] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0720] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0721] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0722] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0723] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0727] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0730] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0731] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0733] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0735] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0738] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0739] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0742] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0744] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0746] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0748] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0751] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0762] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0764] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 kernel: br-ex: entered promiscuous mode
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0801] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0803] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0806] device (eth1): Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0882] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0889] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 kernel: vlan22: entered promiscuous mode
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0916] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0919] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0922] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0981] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.0988] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1011] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1012] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1015] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 kernel: vlan21: entered promiscuous mode
Nov 26 07:34:06 np0005536586 kernel: vlan23: entered promiscuous mode
Nov 26 07:34:06 np0005536586 systemd-udevd[51808]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1126] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 26 07:34:06 np0005536586 kernel: vlan20: entered promiscuous mode
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1145] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1210] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1213] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1213] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1217] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1227] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1247] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1249] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1250] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1253] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1275] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1291] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1292] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 07:34:06 np0005536586 NetworkManager[49024]: <info>  [1764160446.1296] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.2228] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.3379] checkpoint[0x561e45324950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.3381] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.4539] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.4549] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.6126] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.7268] checkpoint[0x561e45324a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.7272] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.9631] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 07:34:07 np0005536586 NetworkManager[49024]: <info>  [1764160447.9643] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.1257] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.1269] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.1274] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Nov 26 07:34:08 np0005536586 python3.9[52158]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=status _async_dir=/root/.ansible_async
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.1336] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.2466] checkpoint[0x561e45324af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 26 07:34:08 np0005536586 NetworkManager[49024]: <info>  [1764160448.2469] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 07:34:08 np0005536586 ansible-async_wrapper.py[51801]: Module complete (51801)
Nov 26 07:34:09 np0005536586 ansible-async_wrapper.py[51800]: Done in kid B.
Nov 26 07:34:11 np0005536586 python3.9[52262]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=status _async_dir=/root/.ansible_async
Nov 26 07:34:11 np0005536586 python3.9[52362]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 07:34:12 np0005536586 python3.9[52514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:12 np0005536586 python3.9[52637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160451.972208-322-128240042946258/.source.returncode _original_basename=.u_cj8cq_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:13 np0005536586 python3.9[52789]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:13 np0005536586 python3.9[52912]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160452.8525379-338-52409619940726/.source.cfg _original_basename=.mm_xu9yd follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:14 np0005536586 python3.9[53064]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:34:14 np0005536586 systemd[1]: Reloading Network Manager...
Nov 26 07:34:14 np0005536586 NetworkManager[49024]: <info>  [1764160454.1742] audit: op="reload" arg="0" pid=53068 uid=0 result="success"
Nov 26 07:34:14 np0005536586 NetworkManager[49024]: <info>  [1764160454.1747] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 26 07:34:14 np0005536586 NetworkManager[49024]: <info>  [1764160454.1748] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 07:34:14 np0005536586 systemd[1]: Reloaded Network Manager.
Nov 26 07:34:14 np0005536586 systemd[1]: session-9.scope: Deactivated successfully.
Nov 26 07:34:14 np0005536586 systemd[1]: session-9.scope: Consumed 35.152s CPU time.
Nov 26 07:34:14 np0005536586 systemd-logind[777]: Session 9 logged out. Waiting for processes to exit.
Nov 26 07:34:14 np0005536586 systemd-logind[777]: Removed session 9.
Nov 26 07:34:19 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 07:34:20 np0005536586 systemd-logind[777]: New session 10 of user zuul.
Nov 26 07:34:20 np0005536586 systemd[1]: Started Session 10 of User zuul.
Nov 26 07:34:20 np0005536586 python3.9[53254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:34:21 np0005536586 python3.9[53409]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:34:22 np0005536586 python3.9[53602]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:34:22 np0005536586 systemd[1]: session-10.scope: Deactivated successfully.
Nov 26 07:34:22 np0005536586 systemd[1]: session-10.scope: Consumed 1.545s CPU time.
Nov 26 07:34:22 np0005536586 systemd-logind[777]: Session 10 logged out. Waiting for processes to exit.
Nov 26 07:34:22 np0005536586 systemd-logind[777]: Removed session 10.
Nov 26 07:34:24 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 07:34:28 np0005536586 systemd-logind[777]: New session 11 of user zuul.
Nov 26 07:34:28 np0005536586 systemd[1]: Started Session 11 of User zuul.
Nov 26 07:34:28 np0005536586 python3.9[53784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:34:29 np0005536586 python3.9[53938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:34:30 np0005536586 python3.9[54094]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:34:31 np0005536586 python3.9[54179]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:34:32 np0005536586 python3.9[54332]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:34:33 np0005536586 python3.9[54527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:33 np0005536586 python3.9[54679]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:34:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2820434000-merged.mount: Deactivated successfully.
Nov 26 07:34:34 np0005536586 podman[54680]: 2025-11-26 12:34:34.017771221 +0000 UTC m=+0.029687612 system refresh
Nov 26 07:34:34 np0005536586 python3.9[54841]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:35 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:34:35 np0005536586 python3.9[54964]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160474.1634064-79-59194933220714/.source.json follow=False _original_basename=podman_network_config.j2 checksum=8661de292338a04cb796b1cfbfa124fb87eda09c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:35 np0005536586 python3.9[55116]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:35 np0005536586 python3.9[55239]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160475.2616408-94-241087117069501/.source.conf follow=False _original_basename=registries.conf.j2 checksum=74ad3fdf1c9c551f4957cab58c04bb2f8b0dc3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:34:36 np0005536586 python3.9[55391]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:34:37 np0005536586 python3.9[55543]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:34:37 np0005536586 python3.9[55696]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:34:37 np0005536586 python3.9[55848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:34:38 np0005536586 python3.9[56000]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:34:40 np0005536586 python3.9[56153]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:34:40 np0005536586 python3.9[56307]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:34:41 np0005536586 python3.9[56459]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:34:41 np0005536586 python3.9[56611]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:34:42 np0005536586 python3.9[56764]: ansible-service_facts Invoked
Nov 26 07:34:42 np0005536586 network[56781]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:34:42 np0005536586 network[56782]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:34:42 np0005536586 network[56783]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:34:45 np0005536586 python3.9[57235]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:34:47 np0005536586 python3.9[57388]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 07:34:48 np0005536586 python3.9[57540]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:48 np0005536586 python3.9[57665]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160487.709519-238-182057613396427/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:49 np0005536586 python3.9[57819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:34:49 np0005536586 python3.9[57944]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160488.6718094-253-15708298012318/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:50 np0005536586 python3.9[58098]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:51 np0005536586 python3.9[58252]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:34:51 np0005536586 python3.9[58336]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:34:52 np0005536586 python3.9[58490]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:34:53 np0005536586 python3.9[58574]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:34:53 np0005536586 chronyd[784]: chronyd exiting
Nov 26 07:34:53 np0005536586 systemd[1]: Stopping NTP client/server...
Nov 26 07:34:53 np0005536586 systemd[1]: chronyd.service: Deactivated successfully.
Nov 26 07:34:53 np0005536586 systemd[1]: Stopped NTP client/server.
Nov 26 07:34:53 np0005536586 systemd[1]: Starting NTP client/server...
Nov 26 07:34:53 np0005536586 chronyd[58583]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 07:34:53 np0005536586 chronyd[58583]: Frequency -9.271 +/- 0.316 ppm read from /var/lib/chrony/drift
Nov 26 07:34:53 np0005536586 chronyd[58583]: Loaded seccomp filter (level 2)
Nov 26 07:34:53 np0005536586 systemd[1]: Started NTP client/server.
Nov 26 07:34:53 np0005536586 systemd[1]: session-11.scope: Deactivated successfully.
Nov 26 07:34:53 np0005536586 systemd[1]: session-11.scope: Consumed 17.807s CPU time.
Nov 26 07:34:53 np0005536586 systemd-logind[777]: Session 11 logged out. Waiting for processes to exit.
Nov 26 07:34:53 np0005536586 systemd-logind[777]: Removed session 11.
Nov 26 07:34:58 np0005536586 systemd-logind[777]: New session 12 of user zuul.
Nov 26 07:34:58 np0005536586 systemd[1]: Started Session 12 of User zuul.
Nov 26 07:34:58 np0005536586 python3.9[58764]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:34:59 np0005536586 python3.9[58916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:00 np0005536586 python3.9[59039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160499.0927436-34-219941388209047/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:00 np0005536586 systemd[1]: session-12.scope: Deactivated successfully.
Nov 26 07:35:00 np0005536586 systemd[1]: session-12.scope: Consumed 1.109s CPU time.
Nov 26 07:35:00 np0005536586 systemd-logind[777]: Session 12 logged out. Waiting for processes to exit.
Nov 26 07:35:00 np0005536586 systemd-logind[777]: Removed session 12.
Nov 26 07:35:05 np0005536586 systemd-logind[777]: New session 13 of user zuul.
Nov 26 07:35:05 np0005536586 systemd[1]: Started Session 13 of User zuul.
Nov 26 07:35:06 np0005536586 python3.9[59217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:35:06 np0005536586 python3.9[59373]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:07 np0005536586 python3.9[59548]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:07 np0005536586 python3.9[59671]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764160506.9851377-41-59377645806977/.source.json _original_basename=.09hf_c3x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:08 np0005536586 python3.9[59823]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:08 np0005536586 python3.9[59946]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160508.1581402-64-137774190168762/.source _original_basename=.slzi8078 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:09 np0005536586 python3.9[60098]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:35:09 np0005536586 python3.9[60250]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:10 np0005536586 python3.9[60373]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160509.5094385-88-234829796847992/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:35:10 np0005536586 python3.9[60525]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:11 np0005536586 python3.9[60648]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160510.328495-88-14132914980075/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:35:11 np0005536586 python3.9[60800]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:11 np0005536586 python3.9[60952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:12 np0005536586 python3.9[61075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160511.5941892-125-94269362455226/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:12 np0005536586 python3.9[61227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:13 np0005536586 python3.9[61350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160512.4004223-140-59433226095829/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:13 np0005536586 python3.9[61502]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:35:13 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:13 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:13 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:14 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:14 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:14 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:14 np0005536586 systemd[1]: Starting EDPM Container Shutdown...
Nov 26 07:35:14 np0005536586 systemd[1]: Finished EDPM Container Shutdown.
Nov 26 07:35:14 np0005536586 python3.9[61727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:15 np0005536586 python3.9[61850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160514.3209085-163-108951258951318/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:15 np0005536586 python3.9[62002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:15 np0005536586 python3.9[62125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160515.1699994-178-136309886822281/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:16 np0005536586 python3.9[62277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:35:16 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:16 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:16 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:16 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:16 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:16 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:16 np0005536586 systemd[1]: Starting Create netns directory...
Nov 26 07:35:16 np0005536586 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 07:35:16 np0005536586 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 07:35:16 np0005536586 systemd[1]: Finished Create netns directory.
Nov 26 07:35:17 np0005536586 python3.9[62503]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:35:17 np0005536586 network[62520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:35:17 np0005536586 network[62521]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:35:17 np0005536586 network[62522]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:35:19 np0005536586 python3.9[62784]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:35:19 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:19 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:19 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:19 np0005536586 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 26 07:35:20 np0005536586 iptables.init[62824]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 26 07:35:20 np0005536586 iptables.init[62824]: iptables: Flushing firewall rules: [  OK  ]
Nov 26 07:35:20 np0005536586 systemd[1]: iptables.service: Deactivated successfully.
Nov 26 07:35:20 np0005536586 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 26 07:35:20 np0005536586 python3.9[63020]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:35:21 np0005536586 python3.9[63174]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:35:21 np0005536586 systemd[1]: Reloading.
Nov 26 07:35:21 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:35:21 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:35:21 np0005536586 systemd[1]: Starting Netfilter Tables...
Nov 26 07:35:21 np0005536586 systemd[1]: Finished Netfilter Tables.
Nov 26 07:35:22 np0005536586 python3.9[63365]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:22 np0005536586 python3.9[63518]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:23 np0005536586 python3.9[63643]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160522.306581-247-209223064560748/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:23 np0005536586 python3.9[63796]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:35:23 np0005536586 systemd[1]: Reloading OpenSSH server daemon...
Nov 26 07:35:23 np0005536586 systemd[1]: Reloaded OpenSSH server daemon.
Nov 26 07:35:24 np0005536586 python3.9[63952]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:24 np0005536586 python3.9[64104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:24 np0005536586 python3.9[64227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160524.1393857-278-89339686471090/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:25 np0005536586 python3.9[64379]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 07:35:25 np0005536586 systemd[1]: Starting Time & Date Service...
Nov 26 07:35:25 np0005536586 systemd[1]: Started Time & Date Service.
Nov 26 07:35:26 np0005536586 python3.9[64535]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:26 np0005536586 python3.9[64687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:26 np0005536586 python3.9[64810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160526.1321833-313-248952859833181/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:27 np0005536586 python3.9[64962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:27 np0005536586 python3.9[65085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160527.007756-328-202451966565404/.source.yaml _original_basename=.7ug6e2qj follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:28 np0005536586 python3.9[65237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:28 np0005536586 python3.9[65360]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160527.8461552-343-109314030339093/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:29 np0005536586 python3.9[65512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:29 np0005536586 python3.9[65665]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:30 np0005536586 python3[65818]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 07:35:30 np0005536586 python3.9[65970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:30 np0005536586 python3.9[66093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160530.2557251-382-10480013966688/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:31 np0005536586 python3.9[66245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:31 np0005536586 python3.9[66368]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160531.1011791-397-262642831478259/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:32 np0005536586 python3.9[66520]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:32 np0005536586 python3.9[66643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160531.9427478-412-230396121106319/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:33 np0005536586 python3.9[66795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:33 np0005536586 python3.9[66918]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160532.7527468-427-263062678697737/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:33 np0005536586 python3.9[67070]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:35:34 np0005536586 python3.9[67193]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160533.5846543-442-36517671802479/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:34 np0005536586 python3.9[67345]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:35 np0005536586 python3.9[67497]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:35 np0005536586 python3.9[67656]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:36 np0005536586 python3.9[67809]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:36 np0005536586 python3.9[67961]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:37 np0005536586 python3.9[68113]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 07:35:37 np0005536586 python3.9[68266]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 07:35:38 np0005536586 systemd[1]: session-13.scope: Deactivated successfully.
Nov 26 07:35:38 np0005536586 systemd[1]: session-13.scope: Consumed 24.477s CPU time.
Nov 26 07:35:38 np0005536586 systemd-logind[777]: Session 13 logged out. Waiting for processes to exit.
Nov 26 07:35:38 np0005536586 systemd-logind[777]: Removed session 13.
Nov 26 07:35:43 np0005536586 systemd-logind[777]: New session 14 of user zuul.
Nov 26 07:35:43 np0005536586 systemd[1]: Started Session 14 of User zuul.
Nov 26 07:35:44 np0005536586 python3.9[68447]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 07:35:44 np0005536586 python3.9[68599]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:35:45 np0005536586 python3.9[68751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:35:46 np0005536586 python3.9[68903]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZE1dpxvL8OPz/VjvFsUTPfsDH6vQml5mdj02SrlFJXfQ252JoKh5fIbIe5jq+eMTBsdiCv9Uyd8xyCUarLeNlJLXFWeql+5MwT2PuY4qrfay7YgFarsvqVEneCieDB/KjZaqMenEf/yZJjvCZifypNg9Of1e8QgrIOrGdP8zeyVeSR6g7d477abOVM7jqxl1dgu5rM+rlTW4DHASE9s/qzG6qu1p1HB8ZEiKsXEtoLhomhrwcTSk94ELWY62pIn8cyapkDsX3TnUoIzQZE8wHuKD+UpY8fWfvFoKo+fdR3UnZmegzF7lylv9XeU/lSEgeDN/LggErCBVNDLBaUG54mPUhEXh3MLVnzgSeCs+DGrchncrg0mgqgKPeAPoZrH+WzFuvKCCsGBjrX8QhxkOy2Q43UXW4uIZlhuzPSsZEnqjd+oz98yWJanGeEkfPCs4nqf6Btd135JYpY2UQoryGnawaWQx/nbU9rePlzY7IbAuDaivVwT3RTKUEmoXfmis=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKuDB4s6WXjGK+4hbQXMcwUNsMga+M2cTnBcJkimQdRS#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2PGuuGeSfke7nCSgI56m6cuyn45RHczvKouRcqVMRuIWRuDTGV0zknjmAVTtZjpkmBwAytv1rMLkBGlVHtizM=#012 create=True mode=0644 path=/tmp/ansible.yrk3lcpn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:46 np0005536586 python3.9[69055]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yrk3lcpn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:47 np0005536586 python3.9[69209]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yrk3lcpn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:47 np0005536586 systemd[1]: session-14.scope: Deactivated successfully.
Nov 26 07:35:47 np0005536586 systemd[1]: session-14.scope: Consumed 2.351s CPU time.
Nov 26 07:35:47 np0005536586 systemd-logind[777]: Session 14 logged out. Waiting for processes to exit.
Nov 26 07:35:47 np0005536586 systemd-logind[777]: Removed session 14.
Nov 26 07:35:52 np0005536586 systemd-logind[777]: New session 15 of user zuul.
Nov 26 07:35:52 np0005536586 systemd[1]: Started Session 15 of User zuul.
Nov 26 07:35:53 np0005536586 python3.9[69387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:35:54 np0005536586 python3.9[69543]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 07:35:54 np0005536586 python3.9[69697]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:35:55 np0005536586 python3.9[69850]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:55 np0005536586 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 07:35:56 np0005536586 python3.9[70005]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:35:56 np0005536586 python3.9[70159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:35:57 np0005536586 python3.9[70314]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:35:57 np0005536586 systemd[1]: session-15.scope: Deactivated successfully.
Nov 26 07:35:57 np0005536586 systemd[1]: session-15.scope: Consumed 3.137s CPU time.
Nov 26 07:35:57 np0005536586 systemd-logind[777]: Session 15 logged out. Waiting for processes to exit.
Nov 26 07:35:57 np0005536586 systemd-logind[777]: Removed session 15.
Nov 26 07:36:01 np0005536586 systemd-logind[777]: New session 16 of user zuul.
Nov 26 07:36:01 np0005536586 systemd[1]: Started Session 16 of User zuul.
Nov 26 07:36:02 np0005536586 python3.9[70492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:36:03 np0005536586 python3.9[70648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:36:03 np0005536586 python3.9[70732]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 07:36:05 np0005536586 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:06 np0005536586 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:36:06 np0005536586 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:36:07 np0005536586 python3.9[71334]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:36:07 np0005536586 systemd[1]: session-16.scope: Deactivated successfully.
Nov 26 07:36:07 np0005536586 systemd[1]: session-16.scope: Consumed 4.185s CPU time.
Nov 26 07:36:07 np0005536586 systemd-logind[777]: Session 16 logged out. Waiting for processes to exit.
Nov 26 07:36:07 np0005536586 systemd-logind[777]: Removed session 16.
Nov 26 07:36:14 np0005536586 systemd-logind[777]: New session 17 of user zuul.
Nov 26 07:36:14 np0005536586 systemd[1]: Started Session 17 of User zuul.
Nov 26 07:36:18 np0005536586 python3[72100]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:36:19 np0005536586 python3[72191]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 07:36:20 np0005536586 python3[72218]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:20 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:36:20 np0005536586 python3[72245]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:20 np0005536586 kernel: loop: module loaded
Nov 26 07:36:20 np0005536586 kernel: loop3: detected capacity change from 0 to 41943040
Nov 26 07:36:21 np0005536586 python3[72279]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:21 np0005536586 lvm[72282]: PV /dev/loop3 not used.
Nov 26 07:36:21 np0005536586 lvm[72291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 07:36:21 np0005536586 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 26 07:36:21 np0005536586 lvm[72293]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 26 07:36:21 np0005536586 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 26 07:36:21 np0005536586 python3[72371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:36:21 np0005536586 python3[72444]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160581.3948598-36670-42174178206932/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:22 np0005536586 python3[72494]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:36:22 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:22 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:22 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:22 np0005536586 systemd[1]: Starting Ceph OSD losetup...
Nov 26 07:36:22 np0005536586 bash[72533]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 26 07:36:22 np0005536586 systemd[1]: Finished Ceph OSD losetup.
Nov 26 07:36:22 np0005536586 lvm[72534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 07:36:22 np0005536586 lvm[72534]: VG ceph_vg0 finished
Nov 26 07:36:22 np0005536586 python3[72560]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 07:36:24 np0005536586 python3[72587]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:24 np0005536586 python3[72613]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:24 np0005536586 kernel: loop4: detected capacity change from 0 to 41943040
Nov 26 07:36:24 np0005536586 python3[72645]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:24 np0005536586 lvm[72648]: PV /dev/loop4 not used.
Nov 26 07:36:24 np0005536586 lvm[72658]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 07:36:24 np0005536586 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 26 07:36:24 np0005536586 lvm[72660]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 26 07:36:24 np0005536586 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 26 07:36:25 np0005536586 python3[72738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:36:25 np0005536586 python3[72811]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160584.8523064-36697-53067327336241/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:25 np0005536586 python3[72861]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:36:25 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:25 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:25 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:25 np0005536586 systemd[1]: Starting Ceph OSD losetup...
Nov 26 07:36:25 np0005536586 bash[72901]: /dev/loop4: [64513]:4194935 (/var/lib/ceph-osd-1.img)
Nov 26 07:36:25 np0005536586 systemd[1]: Finished Ceph OSD losetup.
Nov 26 07:36:25 np0005536586 lvm[72902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 07:36:25 np0005536586 lvm[72902]: VG ceph_vg1 finished
Nov 26 07:36:26 np0005536586 python3[72928]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 07:36:27 np0005536586 python3[72955]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:27 np0005536586 python3[72981]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:27 np0005536586 kernel: loop5: detected capacity change from 0 to 41943040
Nov 26 07:36:27 np0005536586 python3[73013]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:27 np0005536586 lvm[73016]: PV /dev/loop5 not used.
Nov 26 07:36:27 np0005536586 lvm[73026]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 07:36:27 np0005536586 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 26 07:36:28 np0005536586 lvm[73028]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 26 07:36:28 np0005536586 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 26 07:36:28 np0005536586 python3[73106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:36:28 np0005536586 python3[73179]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160588.080042-36724-11163864992032/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:28 np0005536586 python3[73229]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:36:28 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:28 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:28 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:29 np0005536586 systemd[1]: Starting Ceph OSD losetup...
Nov 26 07:36:29 np0005536586 bash[73268]: /dev/loop5: [64513]:4194939 (/var/lib/ceph-osd-2.img)
Nov 26 07:36:29 np0005536586 systemd[1]: Finished Ceph OSD losetup.
Nov 26 07:36:29 np0005536586 lvm[73269]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 07:36:29 np0005536586 lvm[73269]: VG ceph_vg2 finished
Nov 26 07:36:30 np0005536586 python3[73293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:36:32 np0005536586 python3[73386]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 07:36:33 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:36:33 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:36:33 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:36:33 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:36:33 np0005536586 systemd[1]: run-rd4d1e30ccdbe457db6dbf1d17ce5c515.service: Deactivated successfully.
Nov 26 07:36:33 np0005536586 python3[73497]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:34 np0005536586 python3[73525]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:34 np0005536586 python3[73583]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:35 np0005536586 python3[73609]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:35 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:35 np0005536586 python3[73687]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:36:35 np0005536586 python3[73760]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160595.3499851-36871-105810088639733/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:36 np0005536586 python3[73862]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:36:36 np0005536586 python3[73935]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160596.118687-36889-259583515668316/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:36:36 np0005536586 python3[73985]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:37 np0005536586 python3[74013]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:37 np0005536586 python3[74041]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:36:37 np0005536586 python3[74069]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:36:37 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:37 np0005536586 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 07:36:37 np0005536586 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 07:36:37 np0005536586 systemd-logind[777]: New session 18 of user ceph-admin.
Nov 26 07:36:37 np0005536586 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 07:36:37 np0005536586 systemd[1]: Starting User Manager for UID 42477...
Nov 26 07:36:37 np0005536586 systemd[74086]: Queued start job for default target Main User Target.
Nov 26 07:36:37 np0005536586 systemd[74086]: Created slice User Application Slice.
Nov 26 07:36:37 np0005536586 systemd[74086]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 07:36:37 np0005536586 systemd[74086]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 07:36:37 np0005536586 systemd[74086]: Reached target Paths.
Nov 26 07:36:37 np0005536586 systemd[74086]: Reached target Timers.
Nov 26 07:36:37 np0005536586 systemd[74086]: Starting D-Bus User Message Bus Socket...
Nov 26 07:36:37 np0005536586 systemd[74086]: Starting Create User's Volatile Files and Directories...
Nov 26 07:36:37 np0005536586 systemd[74086]: Listening on D-Bus User Message Bus Socket.
Nov 26 07:36:37 np0005536586 systemd[74086]: Reached target Sockets.
Nov 26 07:36:37 np0005536586 systemd[74086]: Finished Create User's Volatile Files and Directories.
Nov 26 07:36:37 np0005536586 systemd[74086]: Reached target Basic System.
Nov 26 07:36:37 np0005536586 systemd[74086]: Reached target Main User Target.
Nov 26 07:36:37 np0005536586 systemd[74086]: Startup finished in 89ms.
Nov 26 07:36:37 np0005536586 systemd[1]: Started User Manager for UID 42477.
Nov 26 07:36:37 np0005536586 systemd[1]: Started Session 18 of User ceph-admin.
Nov 26 07:36:37 np0005536586 systemd[1]: session-18.scope: Deactivated successfully.
Nov 26 07:36:37 np0005536586 systemd-logind[777]: Session 18 logged out. Waiting for processes to exit.
Nov 26 07:36:37 np0005536586 systemd-logind[777]: Removed session 18.
Nov 26 07:36:40 np0005536586 systemd[1]: var-lib-containers-storage-overlay-compat587868829-lower\x2dmapped.mount: Deactivated successfully.
Nov 26 07:36:48 np0005536586 systemd[1]: Stopping User Manager for UID 42477...
Nov 26 07:36:48 np0005536586 systemd[74086]: Activating special unit Exit the Session...
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped target Main User Target.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped target Basic System.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped target Paths.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped target Sockets.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped target Timers.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 07:36:48 np0005536586 systemd[74086]: Closed D-Bus User Message Bus Socket.
Nov 26 07:36:48 np0005536586 systemd[74086]: Stopped Create User's Volatile Files and Directories.
Nov 26 07:36:48 np0005536586 systemd[74086]: Removed slice User Application Slice.
Nov 26 07:36:48 np0005536586 systemd[74086]: Reached target Shutdown.
Nov 26 07:36:48 np0005536586 systemd[74086]: Finished Exit the Session.
Nov 26 07:36:48 np0005536586 systemd[74086]: Reached target Exit the Session.
Nov 26 07:36:48 np0005536586 systemd[1]: user@42477.service: Deactivated successfully.
Nov 26 07:36:48 np0005536586 systemd[1]: Stopped User Manager for UID 42477.
Nov 26 07:36:48 np0005536586 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 26 07:36:48 np0005536586 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 26 07:36:48 np0005536586 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 26 07:36:48 np0005536586 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 26 07:36:48 np0005536586 systemd[1]: Removed slice User Slice of UID 42477.
Nov 26 07:36:51 np0005536586 podman[74140]: 2025-11-26 12:36:51.529388351 +0000 UTC m=+13.534072195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:51 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.576709682 +0000 UTC m=+0.029980394 container create 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:36:51 np0005536586 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3259280976-merged.mount: Deactivated successfully.
Nov 26 07:36:51 np0005536586 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 26 07:36:51 np0005536586 systemd[1]: Started libpod-conmon-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope.
Nov 26 07:36:51 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.639059281 +0000 UTC m=+0.092330013 container init 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.644408391 +0000 UTC m=+0.097679103 container start 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.645345285 +0000 UTC m=+0.098615998 container attach 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.563778705 +0000 UTC m=+0.017049437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:51 np0005536586 modest_lewin[74202]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 07:36:51 np0005536586 systemd[1]: libpod-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope: Deactivated successfully.
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.895637856 +0000 UTC m=+0.348908568 container died 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:51 np0005536586 podman[74189]: 2025-11-26 12:36:51.91909607 +0000 UTC m=+0.372366781 container remove 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:36:51 np0005536586 systemd[1]: libpod-conmon-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope: Deactivated successfully.
Nov 26 07:36:51 np0005536586 podman[74217]: 2025-11-26 12:36:51.960555795 +0000 UTC m=+0.026568025 container create 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:36:51 np0005536586 systemd[1]: Started libpod-conmon-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope.
Nov 26 07:36:51 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:51 np0005536586 podman[74217]: 2025-11-26 12:36:51.997156614 +0000 UTC m=+0.063168854 container init 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 07:36:52 np0005536586 podman[74217]: 2025-11-26 12:36:52.001293699 +0000 UTC m=+0.067305919 container start 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:52 np0005536586 podman[74217]: 2025-11-26 12:36:52.002547202 +0000 UTC m=+0.068559432 container attach 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:52 np0005536586 practical_moser[74230]: 167 167
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74217]: 2025-11-26 12:36:52.003914448 +0000 UTC m=+0.069926668 container died 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:52 np0005536586 podman[74217]: 2025-11-26 12:36:52.018712082 +0000 UTC m=+0.084724302 container remove 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:52 np0005536586 podman[74217]: 2025-11-26 12:36:51.949772705 +0000 UTC m=+0.015784946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.060877376 +0000 UTC m=+0.028322612 container create 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:36:52 np0005536586 systemd[1]: Started libpod-conmon-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.100975766 +0000 UTC m=+0.068421011 container init 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.105028402 +0000 UTC m=+0.072473638 container start 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.106129547 +0000 UTC m=+0.073574782 container attach 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:52 np0005536586 eloquent_dirac[74257]: AQBk9CZp4cAuBxAAgg2KySB/jebvKIranhCLbw==
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.122644888 +0000 UTC m=+0.090090123 container died 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.13734036 +0000 UTC m=+0.104785595 container remove 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:36:52 np0005536586 podman[74244]: 2025-11-26 12:36:52.049391743 +0000 UTC m=+0.016836978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.176592393 +0000 UTC m=+0.026125130 container create 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 07:36:52 np0005536586 systemd[1]: Started libpod-conmon-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.214867518 +0000 UTC m=+0.064400274 container init 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.220431062 +0000 UTC m=+0.069963798 container start 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.221612408 +0000 UTC m=+0.071145144 container attach 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:52 np0005536586 sweet_ardinghelli[74287]: AQBk9CZpikwEDhAAdJSpQN1kvWuQUK/eKTCybg==
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.237452246 +0000 UTC m=+0.086984982 container died 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.252578599 +0000 UTC m=+0.102111336 container remove 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:52 np0005536586 podman[74271]: 2025-11-26 12:36:52.166323023 +0000 UTC m=+0.015855779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.295454963 +0000 UTC m=+0.028669404 container create 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:36:52 np0005536586 systemd[1]: Started libpod-conmon-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.329934705 +0000 UTC m=+0.063149167 container init 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.333894647 +0000 UTC m=+0.067109089 container start 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.334990453 +0000 UTC m=+0.068204904 container attach 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:36:52 np0005536586 sad_poitras[74319]: AQBk9CZp7pXKFBAA12r1szxr0Tbew8rIVdADTA==
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.350980293 +0000 UTC m=+0.084194734 container died 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.36705365 +0000 UTC m=+0.100268092 container remove 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:36:52 np0005536586 podman[74302]: 2025-11-26 12:36:52.284219682 +0000 UTC m=+0.017434133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.410091058 +0000 UTC m=+0.027888894 container create 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:52 np0005536586 systemd[1]: Started libpod-conmon-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a3711e846c79c6a8a16683a4732f85f4452140d4e5abca66938bded86a14b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.452505924 +0000 UTC m=+0.070303769 container init 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.456641015 +0000 UTC m=+0.074438851 container start 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.457729687 +0000 UTC m=+0.075527522 container attach 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:52 np0005536586 wonderful_buck[74349]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 26 07:36:52 np0005536586 wonderful_buck[74349]: setting min_mon_release = pacific
Nov 26 07:36:52 np0005536586 wonderful_buck[74349]: /usr/bin/monmaptool: set fsid to f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:52 np0005536586 wonderful_buck[74349]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.479607893 +0000 UTC m=+0.097405729 container died 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.495566575 +0000 UTC m=+0.113364410 container remove 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:52 np0005536586 podman[74336]: 2025-11-26 12:36:52.398925579 +0000 UTC m=+0.016723434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.539463964 +0000 UTC m=+0.028021785 container create 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:36:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay-931fbe0a52ff430b9fff5ae8c2d892c45edc17a5e7017fb19ac58ef2482437cf-merged.mount: Deactivated successfully.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libpod-conmon-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope.
Nov 26 07:36:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.5883166 +0000 UTC m=+0.076874431 container init 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.592112873 +0000 UTC m=+0.080670694 container start 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.593127705 +0000 UTC m=+0.081685536 container attach 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.528103977 +0000 UTC m=+0.016661818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.632578444 +0000 UTC m=+0.121136265 container died 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b-merged.mount: Deactivated successfully.
Nov 26 07:36:52 np0005536586 podman[74365]: 2025-11-26 12:36:52.6480194 +0000 UTC m=+0.136577221 container remove 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:36:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:52 np0005536586 systemd[1]: libpod-conmon-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope: Deactivated successfully.
Nov 26 07:36:52 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:52 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:52 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:52 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:52 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:52 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:53 np0005536586 systemd[1]: Reached target All Ceph clusters and services.
Nov 26 07:36:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:53 np0005536586 systemd[1]: Reached target Ceph cluster f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:53 np0005536586 systemd[1]: Created slice Slice /system/ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:53 np0005536586 systemd[1]: Reached target System Time Set.
Nov 26 07:36:53 np0005536586 systemd[1]: Reached target System Time Synchronized.
Nov 26 07:36:53 np0005536586 systemd[1]: Starting Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:36:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:53 np0005536586 podman[74643]: 2025-11-26 12:36:53.829444606 +0000 UTC m=+0.026843515 container create dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 podman[74643]: 2025-11-26 12:36:53.871868157 +0000 UTC m=+0.069267056 container init dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:36:53 np0005536586 podman[74643]: 2025-11-26 12:36:53.876908744 +0000 UTC m=+0.074307644 container start dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 07:36:53 np0005536586 bash[74643]: dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e
Nov 26 07:36:53 np0005536586 podman[74643]: 2025-11-26 12:36:53.817277648 +0000 UTC m=+0.014676567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:53 np0005536586 systemd[1]: Started Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: pidfile_write: ignore empty --pid-file
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: load: jerasure load: lrc 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Git sha 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: DB SUMMARY
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: DB Session ID:  YBP93YZ1IQGH1EXX8KK1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                                     Options.env: 0x55c901bffc40
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                                Options.info_log: 0x55c903c6ce80
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                                 Options.wal_dir: 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                    Options.write_buffer_manager: 0x55c903c7cb40
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                               Options.row_cache: None
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                              Options.wal_filter: None
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.wal_compression: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.max_background_jobs: 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Compression algorithms supported:
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kZSTD supported: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kXpressCompression supported: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kBZip2Compression supported: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kLZ4Compression supported: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kZlibCompression supported: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: #011kSnappyCompression supported: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:           Options.merge_operator: 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:        Options.compaction_filter: None
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c903c6ca80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c903c651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.compression: NoCompression
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.num_levels: 7
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 360f285c-8dc8-4f98-b8a2-efdebada3f64
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908101, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908845, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "YBP93YZ1IQGH1EXX8KK1", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908921, "job": 1, "event": "recovery_finished"}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c903c8ee00
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: DB pointer 0x55c903d18000
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c903c651f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@-1(???) e0 preinit fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC 7763 64-Core Processor,created_at=2025-11-26T12:36:52.617229Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,os=Linux}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).mds e1 new map
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mkfs f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:53 np0005536586 podman[74660]: 2025-11-26 12:36:53.930829039 +0000 UTC m=+0.032652549 container create 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 07:36:53 np0005536586 ceph-mon[74659]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:53 np0005536586 systemd[1]: Started libpod-conmon-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope.
Nov 26 07:36:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:53 np0005536586 podman[74660]: 2025-11-26 12:36:53.988847687 +0000 UTC m=+0.090671206 container init 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:36:53 np0005536586 podman[74660]: 2025-11-26 12:36:53.993058923 +0000 UTC m=+0.094882432 container start 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:36:53 np0005536586 podman[74660]: 2025-11-26 12:36:53.994069899 +0000 UTC m=+0.095893407 container attach 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 26 07:36:54 np0005536586 podman[74660]: 2025-11-26 12:36:53.91729086 +0000 UTC m=+0.019114389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221525536' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:  cluster:
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    id:     f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    health: HEALTH_OK
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]: 
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:  services:
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    mon: 1 daemons, quorum compute-0 (age 0.391389s)
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    mgr: no daemons active
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    osd: 0 osds: 0 up, 0 in
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]: 
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:  data:
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    pools:   0 pools, 0 pgs
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    objects: 0 objects, 0 B
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    usage:   0 B used, 0 B / 0 B avail
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]:    pgs:     
Nov 26 07:36:54 np0005536586 pensive_goldwasser[74711]: 
Nov 26 07:36:54 np0005536586 systemd[1]: libpod-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope: Deactivated successfully.
Nov 26 07:36:54 np0005536586 podman[74660]: 2025-11-26 12:36:54.33178796 +0000 UTC m=+0.433611468 container died 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:54 np0005536586 podman[74660]: 2025-11-26 12:36:54.353701312 +0000 UTC m=+0.455524821 container remove 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:36:54 np0005536586 systemd[1]: libpod-conmon-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope: Deactivated successfully.
Nov 26 07:36:54 np0005536586 podman[74747]: 2025-11-26 12:36:54.39153831 +0000 UTC m=+0.023944440 container create fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:54 np0005536586 systemd[1]: Started libpod-conmon-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope.
Nov 26 07:36:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 podman[74747]: 2025-11-26 12:36:54.43843014 +0000 UTC m=+0.070836281 container init fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:36:54 np0005536586 podman[74747]: 2025-11-26 12:36:54.442678456 +0000 UTC m=+0.075084577 container start fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:36:54 np0005536586 podman[74747]: 2025-11-26 12:36:54.443866094 +0000 UTC m=+0.076272215 container attach fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:54 np0005536586 podman[74747]: 2025-11-26 12:36:54.381794038 +0000 UTC m=+0.014200178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 07:36:54 np0005536586 vigorous_yonath[74761]: 
Nov 26 07:36:54 np0005536586 vigorous_yonath[74761]: [global]
Nov 26 07:36:54 np0005536586 vigorous_yonath[74761]: #011fsid = f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:54 np0005536586 vigorous_yonath[74761]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 26 07:36:54 np0005536586 vigorous_yonath[74761]: #011osd_crush_chooseleaf_type = 0
Nov 26 07:36:54 np0005536586 systemd[1]: libpod-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope: Deactivated successfully.
Nov 26 07:36:54 np0005536586 podman[74787]: 2025-11-26 12:36:54.790951797 +0000 UTC m=+0.017010485 container died fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53-merged.mount: Deactivated successfully.
Nov 26 07:36:54 np0005536586 podman[74787]: 2025-11-26 12:36:54.809399219 +0000 UTC m=+0.035457897 container remove fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:54 np0005536586 systemd[1]: libpod-conmon-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope: Deactivated successfully.
Nov 26 07:36:54 np0005536586 podman[74798]: 2025-11-26 12:36:54.852041852 +0000 UTC m=+0.025910615 container create 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:54 np0005536586 systemd[1]: Started libpod-conmon-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope.
Nov 26 07:36:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:54 np0005536586 podman[74798]: 2025-11-26 12:36:54.90461563 +0000 UTC m=+0.078484383 container init 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:54 np0005536586 podman[74798]: 2025-11-26 12:36:54.910449583 +0000 UTC m=+0.084318336 container start 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:54 np0005536586 podman[74798]: 2025-11-26 12:36:54.911642291 +0000 UTC m=+0.085511043 container attach 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:36:54 np0005536586 ceph-mon[74659]: from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 07:36:54 np0005536586 podman[74798]: 2025-11-26 12:36:54.84230228 +0000 UTC m=+0.016171053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201017770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:36:55 np0005536586 systemd[1]: libpod-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope: Deactivated successfully.
Nov 26 07:36:55 np0005536586 podman[74798]: 2025-11-26 12:36:55.232230994 +0000 UTC m=+0.406099758 container died 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492-merged.mount: Deactivated successfully.
Nov 26 07:36:55 np0005536586 podman[74798]: 2025-11-26 12:36:55.253284837 +0000 UTC m=+0.427153590 container remove 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 26 07:36:55 np0005536586 systemd[1]: libpod-conmon-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope: Deactivated successfully.
Nov 26 07:36:55 np0005536586 systemd[1]: Stopping Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: mon.compute-0@0(leader) e1 shutdown
Nov 26 07:36:55 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74655]: 2025-11-26T12:36:55.377+0000 7fc184ea7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 07:36:55 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74655]: 2025-11-26T12:36:55.377+0000 7fc184ea7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 07:36:55 np0005536586 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 07:36:55 np0005536586 podman[74869]: 2025-11-26 12:36:55.571916672 +0000 UTC m=+0.216745707 container died dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:36:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188-merged.mount: Deactivated successfully.
Nov 26 07:36:55 np0005536586 podman[74869]: 2025-11-26 12:36:55.588343526 +0000 UTC m=+0.233172562 container remove dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:55 np0005536586 bash[74869]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0
Nov 26 07:36:55 np0005536586 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mon.compute-0.service: Deactivated successfully.
Nov 26 07:36:55 np0005536586 systemd[1]: Stopped Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:55 np0005536586 systemd[1]: Starting Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:36:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 07:36:55 np0005536586 podman[74949]: 2025-11-26 12:36:55.825626662 +0000 UTC m=+0.026159975 container create ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 podman[74949]: 2025-11-26 12:36:55.868181241 +0000 UTC m=+0.068714564 container init ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 26 07:36:55 np0005536586 podman[74949]: 2025-11-26 12:36:55.874377205 +0000 UTC m=+0.074910519 container start ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:36:55 np0005536586 bash[74949]: ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537
Nov 26 07:36:55 np0005536586 podman[74949]: 2025-11-26 12:36:55.815115766 +0000 UTC m=+0.015649099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:55 np0005536586 systemd[1]: Started Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: pidfile_write: ignore empty --pid-file
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: load: jerasure load: lrc 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Git sha 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: DB SUMMARY
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: DB Session ID:  S468WH7D6IL73VDKE1V5
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54266 ; 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                                     Options.env: 0x560bce926c40
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                                Options.info_log: 0x560bd0ea3040
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                                 Options.wal_dir: 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                    Options.write_buffer_manager: 0x560bd0eb2b40
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                               Options.row_cache: None
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                              Options.wal_filter: None
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.wal_compression: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.max_background_jobs: 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Compression algorithms supported:
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kZSTD supported: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kXpressCompression supported: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kZlibCompression supported: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:           Options.merge_operator: 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:        Options.compaction_filter: None
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560bd0ea2c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560bd0e9b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.compression: NoCompression
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.num_levels: 7
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 360f285c-8dc8-4f98-b8a2-efdebada3f64
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615908587, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615909926, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 53966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 131, "table_properties": {"data_size": 52525, "index_size": 147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2994, "raw_average_key_size": 30, "raw_value_size": 50172, "raw_average_value_size": 511, "num_data_blocks": 7, "num_entries": 98, "num_filter_entries": 98, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160615, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615910005, "job": 1, "event": "recovery_finished"}
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560bd0ec4e00
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: DB pointer 0x560bd0f4e000
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   54.60 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0   54.60 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 6.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 6.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???) e1 preinit fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).mds e1 new map
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 07:36:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 07:36:55 np0005536586 podman[74967]: 2025-11-26 12:36:55.921007014 +0000 UTC m=+0.027659613 container create fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:55 np0005536586 systemd[1]: Started libpod-conmon-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope.
Nov 26 07:36:55 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:55 np0005536586 podman[74967]: 2025-11-26 12:36:55.976690009 +0000 UTC m=+0.083342599 container init fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:36:55 np0005536586 podman[74967]: 2025-11-26 12:36:55.982096317 +0000 UTC m=+0.088748906 container start fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 07:36:55 np0005536586 podman[74967]: 2025-11-26 12:36:55.983635628 +0000 UTC m=+0.090288237 container attach fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:56 np0005536586 podman[74967]: 2025-11-26 12:36:55.90995148 +0000 UTC m=+0.016604089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 26 07:36:56 np0005536586 systemd[1]: libpod-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope: Deactivated successfully.
Nov 26 07:36:56 np0005536586 podman[74967]: 2025-11-26 12:36:56.313942694 +0000 UTC m=+0.420595283 container died fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:36:56 np0005536586 podman[74967]: 2025-11-26 12:36:56.335680104 +0000 UTC m=+0.442332694 container remove fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:36:56 np0005536586 systemd[1]: libpod-conmon-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope: Deactivated successfully.
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.376769712 +0000 UTC m=+0.026639419 container create 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:56 np0005536586 systemd[1]: Started libpod-conmon-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope.
Nov 26 07:36:56 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.4291263 +0000 UTC m=+0.078996017 container init 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.433386438 +0000 UTC m=+0.083256135 container start 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.434779874 +0000 UTC m=+0.084649591 container attach 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.366156203 +0000 UTC m=+0.016025920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 26 07:36:56 np0005536586 systemd[1]: libpod-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope: Deactivated successfully.
Nov 26 07:36:56 np0005536586 conmon[75069]: conmon 74422ccac611b554fca8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope/container/memory.events
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.764864581 +0000 UTC m=+0.414734277 container died 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:36:56 np0005536586 systemd[1]: var-lib-containers-storage-overlay-48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a-merged.mount: Deactivated successfully.
Nov 26 07:36:56 np0005536586 podman[75054]: 2025-11-26 12:36:56.785202525 +0000 UTC m=+0.435072222 container remove 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:36:56 np0005536586 systemd[1]: libpod-conmon-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope: Deactivated successfully.
Nov 26 07:36:56 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:56 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:56 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:56 np0005536586 ceph-mon[74966]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 07:36:57 np0005536586 systemd[1]: Reloading.
Nov 26 07:36:57 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:36:57 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:36:57 np0005536586 systemd[1]: Starting Ceph mgr.compute-0.whkbdn for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:36:57 np0005536586 podman[75220]: 2025-11-26 12:36:57.353059008 +0000 UTC m=+0.027097903 container create c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/lib/ceph/mgr/ceph-compute-0.whkbdn supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 podman[75220]: 2025-11-26 12:36:57.391054315 +0000 UTC m=+0.065093220 container init c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 07:36:57 np0005536586 podman[75220]: 2025-11-26 12:36:57.394927533 +0000 UTC m=+0.068966428 container start c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:36:57 np0005536586 bash[75220]: c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817
Nov 26 07:36:57 np0005536586 podman[75220]: 2025-11-26 12:36:57.341230639 +0000 UTC m=+0.015269545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:57 np0005536586 systemd[1]: Started Ceph mgr.compute-0.whkbdn for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: pidfile_write: ignore empty --pid-file
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.444915909 +0000 UTC m=+0.028053694 container create 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:57 np0005536586 systemd[1]: Started libpod-conmon-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope.
Nov 26 07:36:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.500784285 +0000 UTC m=+0.083922071 container init 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.506091896 +0000 UTC m=+0.089229682 container start 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.507184836 +0000 UTC m=+0.090322621 container attach 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'alerts'
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.434364216 +0000 UTC m=+0.017502021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:36:57 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'balancer'
Nov 26 07:36:57 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:57.784+0000 7f954fa56140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:36:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:36:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100115608' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:36:57 np0005536586 sad_cerf[75274]: 
Nov 26 07:36:57 np0005536586 sad_cerf[75274]: {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "health": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "status": "HEALTH_OK",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "checks": {},
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "mutes": []
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "election_epoch": 5,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "quorum": [
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        0
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    ],
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "quorum_names": [
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "compute-0"
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    ],
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "quorum_age": 1,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "monmap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "epoch": 1,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "min_mon_release_name": "reef",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_mons": 1
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "osdmap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "epoch": 1,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_osds": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_up_osds": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "osd_up_since": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_in_osds": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "osd_in_since": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_remapped_pgs": 0
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "pgmap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "pgs_by_state": [],
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_pgs": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_pools": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_objects": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "data_bytes": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "bytes_used": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "bytes_avail": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "bytes_total": 0
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "fsmap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "epoch": 1,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "by_rank": [],
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "up:standby": 0
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "mgrmap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "available": false,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "num_standbys": 0,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "modules": [
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:            "iostat",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:            "nfs",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:            "restful"
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        ],
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "services": {}
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "servicemap": {
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "epoch": 1,
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:        "services": {}
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    },
Nov 26 07:36:57 np0005536586 sad_cerf[75274]:    "progress_events": {}
Nov 26 07:36:57 np0005536586 sad_cerf[75274]: }
Nov 26 07:36:57 np0005536586 systemd[1]: libpod-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope: Deactivated successfully.
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.831739412 +0000 UTC m=+0.414877197 container died 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:36:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf-merged.mount: Deactivated successfully.
Nov 26 07:36:57 np0005536586 podman[75237]: 2025-11-26 12:36:57.861737379 +0000 UTC m=+0.444875164 container remove 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:36:57 np0005536586 systemd[1]: libpod-conmon-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope: Deactivated successfully.
Nov 26 07:36:58 np0005536586 ceph-mgr[75236]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:36:58 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'cephadm'
Nov 26 07:36:58 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:58.016+0000 7f954fa56140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:36:59 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'crash'
Nov 26 07:36:59 np0005536586 ceph-mgr[75236]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:36:59 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'dashboard'
Nov 26 07:36:59 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:59.875+0000 7f954fa56140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:36:59 np0005536586 podman[75321]: 2025-11-26 12:36:59.90727008 +0000 UTC m=+0.027975739 container create 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:36:59 np0005536586 systemd[1]: Started libpod-conmon-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope.
Nov 26 07:36:59 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:36:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:36:59 np0005536586 podman[75321]: 2025-11-26 12:36:59.954351848 +0000 UTC m=+0.075057507 container init 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:36:59 np0005536586 podman[75321]: 2025-11-26 12:36:59.959533603 +0000 UTC m=+0.080239261 container start 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:36:59 np0005536586 podman[75321]: 2025-11-26 12:36:59.963826422 +0000 UTC m=+0.084532100 container attach 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:36:59 np0005536586 podman[75321]: 2025-11-26 12:36:59.895444526 +0000 UTC m=+0.016150194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426630928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]: 
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]: {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "health": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "status": "HEALTH_OK",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "checks": {},
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "mutes": []
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "election_epoch": 5,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "quorum": [
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        0
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    ],
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "quorum_names": [
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "compute-0"
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    ],
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "quorum_age": 4,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "monmap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "epoch": 1,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "min_mon_release_name": "reef",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_mons": 1
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "osdmap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "epoch": 1,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_osds": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_up_osds": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "osd_up_since": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_in_osds": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "osd_in_since": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_remapped_pgs": 0
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "pgmap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "pgs_by_state": [],
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_pgs": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_pools": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_objects": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "data_bytes": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "bytes_used": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "bytes_avail": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "bytes_total": 0
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "fsmap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "epoch": 1,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "by_rank": [],
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "up:standby": 0
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "mgrmap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "available": false,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "num_standbys": 0,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "modules": [
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:            "iostat",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:            "nfs",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:            "restful"
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        ],
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "services": {}
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "servicemap": {
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "epoch": 1,
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:        "services": {}
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    },
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]:    "progress_events": {}
Nov 26 07:37:00 np0005536586 gracious_elbakyan[75334]: }
Nov 26 07:37:00 np0005536586 systemd[1]: libpod-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope: Deactivated successfully.
Nov 26 07:37:00 np0005536586 podman[75321]: 2025-11-26 12:37:00.28032621 +0000 UTC m=+0.401031868 container died 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:37:00 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7-merged.mount: Deactivated successfully.
Nov 26 07:37:00 np0005536586 podman[75321]: 2025-11-26 12:37:00.304139172 +0000 UTC m=+0.424844830 container remove 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:00 np0005536586 systemd[1]: libpod-conmon-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope: Deactivated successfully.
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'devicehealth'
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.308+0000 7f954fa56140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]:  from numpy import show_config as show_numpy_config
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'influx'
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.766+0000 7f954fa56140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 07:37:01 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'insights'
Nov 26 07:37:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.974+0000 7f954fa56140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 07:37:02 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'iostat'
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.34640125 +0000 UTC m=+0.026399227 container create 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:02 np0005536586 systemd[1]: Started libpod-conmon-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope.
Nov 26 07:37:02 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:02 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:02 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:02 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:02 np0005536586 ceph-mgr[75236]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 07:37:02 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'k8sevents'
Nov 26 07:37:02 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:02.391+0000 7f954fa56140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.396021763 +0000 UTC m=+0.076019740 container init 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.400396407 +0000 UTC m=+0.080394414 container start 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.401458016 +0000 UTC m=+0.081455994 container attach 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.33538957 +0000 UTC m=+0.015387567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654192452' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]: 
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]: {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "health": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "status": "HEALTH_OK",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "checks": {},
Nov 26 07:37:02 np0005536586 chronyd[58583]: Selected source 104.131.155.175 (pool.ntp.org)
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "mutes": []
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "election_epoch": 5,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "quorum": [
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        0
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    ],
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "quorum_names": [
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "compute-0"
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    ],
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "quorum_age": 6,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "monmap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "epoch": 1,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "min_mon_release_name": "reef",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_mons": 1
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "osdmap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "epoch": 1,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_osds": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_up_osds": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "osd_up_since": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_in_osds": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "osd_in_since": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_remapped_pgs": 0
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "pgmap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "pgs_by_state": [],
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_pgs": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_pools": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_objects": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "data_bytes": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "bytes_used": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "bytes_avail": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "bytes_total": 0
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "fsmap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "epoch": 1,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "by_rank": [],
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "up:standby": 0
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "mgrmap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "available": false,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "num_standbys": 0,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "modules": [
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:            "iostat",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:            "nfs",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:            "restful"
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        ],
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "services": {}
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "servicemap": {
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "epoch": 1,
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:        "services": {}
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    },
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]:    "progress_events": {}
Nov 26 07:37:02 np0005536586 gifted_mccarthy[75383]: }
Nov 26 07:37:02 np0005536586 systemd[1]: libpod-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope: Deactivated successfully.
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.725594102 +0000 UTC m=+0.405592080 container died 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:02 np0005536586 systemd[1]: var-lib-containers-storage-overlay-bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e-merged.mount: Deactivated successfully.
Nov 26 07:37:02 np0005536586 podman[75371]: 2025-11-26 12:37:02.756566091 +0000 UTC m=+0.436564068 container remove 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:37:02 np0005536586 systemd[1]: libpod-conmon-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope: Deactivated successfully.
Nov 26 07:37:03 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'localpool'
Nov 26 07:37:04 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 07:37:04 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'mirroring'
Nov 26 07:37:04 np0005536586 podman[75418]: 2025-11-26 12:37:04.799540045 +0000 UTC m=+0.025582277 container create e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:04 np0005536586 systemd[1]: Started libpod-conmon-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope.
Nov 26 07:37:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:04 np0005536586 podman[75418]: 2025-11-26 12:37:04.843604028 +0000 UTC m=+0.069646279 container init e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:37:04 np0005536586 podman[75418]: 2025-11-26 12:37:04.848068982 +0000 UTC m=+0.074111212 container start e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:37:04 np0005536586 podman[75418]: 2025-11-26 12:37:04.84978714 +0000 UTC m=+0.075829381 container attach e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:37:04 np0005536586 podman[75418]: 2025-11-26 12:37:04.789110664 +0000 UTC m=+0.015152915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:04 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'nfs'
Nov 26 07:37:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521650800' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]: 
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]: {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "health": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "status": "HEALTH_OK",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "checks": {},
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "mutes": []
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "election_epoch": 5,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "quorum": [
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        0
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    ],
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "quorum_names": [
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "compute-0"
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    ],
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "quorum_age": 9,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "monmap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "epoch": 1,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "min_mon_release_name": "reef",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_mons": 1
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "osdmap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "epoch": 1,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_osds": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_up_osds": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "osd_up_since": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_in_osds": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "osd_in_since": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_remapped_pgs": 0
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "pgmap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "pgs_by_state": [],
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_pgs": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_pools": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_objects": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "data_bytes": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "bytes_used": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "bytes_avail": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "bytes_total": 0
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "fsmap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "epoch": 1,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "by_rank": [],
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "up:standby": 0
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "mgrmap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "available": false,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "num_standbys": 0,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "modules": [
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:            "iostat",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:            "nfs",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:            "restful"
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        ],
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "services": {}
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "servicemap": {
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "epoch": 1,
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:        "services": {}
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    },
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]:    "progress_events": {}
Nov 26 07:37:05 np0005536586 upbeat_brattain[75432]: }
Nov 26 07:37:05 np0005536586 systemd[1]: libpod-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope: Deactivated successfully.
Nov 26 07:37:05 np0005536586 podman[75418]: 2025-11-26 12:37:05.174366382 +0000 UTC m=+0.400408613 container died e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:05 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141-merged.mount: Deactivated successfully.
Nov 26 07:37:05 np0005536586 podman[75418]: 2025-11-26 12:37:05.195588251 +0000 UTC m=+0.421630483 container remove e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:37:05 np0005536586 systemd[1]: libpod-conmon-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope: Deactivated successfully.
Nov 26 07:37:05 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:05.520+0000 7f954fa56140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 07:37:05 np0005536586 ceph-mgr[75236]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 07:37:05 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'orchestrator'
Nov 26 07:37:06 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.097+0000 7f954fa56140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 07:37:06 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.330+0000 7f954fa56140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'osd_support'
Nov 26 07:37:06 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.536+0000 7f954fa56140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 07:37:06 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.773+0000 7f954fa56140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'progress'
Nov 26 07:37:06 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.981+0000 7f954fa56140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 07:37:06 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'prometheus'
Nov 26 07:37:07 np0005536586 podman[75468]: 2025-11-26 12:37:07.237710246 +0000 UTC m=+0.026295702 container create 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 07:37:07 np0005536586 systemd[1]: Started libpod-conmon-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope.
Nov 26 07:37:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:07 np0005536586 podman[75468]: 2025-11-26 12:37:07.281827148 +0000 UTC m=+0.070412614 container init 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:37:07 np0005536586 podman[75468]: 2025-11-26 12:37:07.28658858 +0000 UTC m=+0.075174035 container start 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 26 07:37:07 np0005536586 podman[75468]: 2025-11-26 12:37:07.288821708 +0000 UTC m=+0.077407164 container attach 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:37:07 np0005536586 podman[75468]: 2025-11-26 12:37:07.227065086 +0000 UTC m=+0.015650563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:07 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788382095' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]: 
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]: {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "health": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "status": "HEALTH_OK",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "checks": {},
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "mutes": []
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "election_epoch": 5,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "quorum": [
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        0
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    ],
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "quorum_names": [
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "compute-0"
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    ],
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "quorum_age": 11,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "monmap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "epoch": 1,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "min_mon_release_name": "reef",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_mons": 1
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "osdmap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "epoch": 1,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_osds": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_up_osds": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "osd_up_since": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_in_osds": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "osd_in_since": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_remapped_pgs": 0
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "pgmap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "pgs_by_state": [],
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_pgs": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_pools": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_objects": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "data_bytes": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "bytes_used": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "bytes_avail": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "bytes_total": 0
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "fsmap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "epoch": 1,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "by_rank": [],
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "up:standby": 0
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "mgrmap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "available": false,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "num_standbys": 0,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "modules": [
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:            "iostat",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:            "nfs",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:            "restful"
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        ],
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "services": {}
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "servicemap": {
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "epoch": 1,
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:        "services": {}
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    },
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]:    "progress_events": {}
Nov 26 07:37:07 np0005536586 gallant_maxwell[75482]: }
Nov 26 07:37:07 np0005536586 systemd[1]: libpod-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope: Deactivated successfully.
Nov 26 07:37:07 np0005536586 podman[75508]: 2025-11-26 12:37:07.637445569 +0000 UTC m=+0.016011412 container died 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:07 np0005536586 systemd[1]: var-lib-containers-storage-overlay-51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8-merged.mount: Deactivated successfully.
Nov 26 07:37:07 np0005536586 podman[75508]: 2025-11-26 12:37:07.658253158 +0000 UTC m=+0.036819000 container remove 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:07 np0005536586 systemd[1]: libpod-conmon-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope: Deactivated successfully.
Nov 26 07:37:07 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:07.855+0000 7f954fa56140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 07:37:07 np0005536586 ceph-mgr[75236]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 07:37:07 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rbd_support'
Nov 26 07:37:08 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:08.116+0000 7f954fa56140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 07:37:08 np0005536586 ceph-mgr[75236]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 07:37:08 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'restful'
Nov 26 07:37:08 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rgw'
Nov 26 07:37:09 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:09.342+0000 7f954fa56140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 07:37:09 np0005536586 ceph-mgr[75236]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 07:37:09 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rook'
Nov 26 07:37:09 np0005536586 podman[75520]: 2025-11-26 12:37:09.702150948 +0000 UTC m=+0.025671475 container create 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:37:09 np0005536586 systemd[1]: Started libpod-conmon-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope.
Nov 26 07:37:09 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:09 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:09 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:09 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:09 np0005536586 podman[75520]: 2025-11-26 12:37:09.751407225 +0000 UTC m=+0.074927762 container init 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 07:37:09 np0005536586 podman[75520]: 2025-11-26 12:37:09.755219959 +0000 UTC m=+0.078740486 container start 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:09 np0005536586 podman[75520]: 2025-11-26 12:37:09.756290326 +0000 UTC m=+0.079810853 container attach 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:37:09 np0005536586 podman[75520]: 2025-11-26 12:37:09.691244857 +0000 UTC m=+0.014765404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:10 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569570021' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]: 
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]: {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "health": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "status": "HEALTH_OK",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "checks": {},
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "mutes": []
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "election_epoch": 5,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "quorum": [
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        0
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    ],
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "quorum_names": [
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "compute-0"
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    ],
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "quorum_age": 14,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "monmap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "epoch": 1,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "min_mon_release_name": "reef",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_mons": 1
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "osdmap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "epoch": 1,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_osds": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_up_osds": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "osd_up_since": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_in_osds": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "osd_in_since": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_remapped_pgs": 0
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "pgmap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "pgs_by_state": [],
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_pgs": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_pools": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_objects": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "data_bytes": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "bytes_used": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "bytes_avail": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "bytes_total": 0
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "fsmap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "epoch": 1,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "by_rank": [],
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "up:standby": 0
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "mgrmap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "available": false,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "num_standbys": 0,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "modules": [
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:            "iostat",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:            "nfs",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:            "restful"
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        ],
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "services": {}
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "servicemap": {
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "epoch": 1,
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:        "services": {}
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    },
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]:    "progress_events": {}
Nov 26 07:37:10 np0005536586 vigorous_yalow[75533]: }
Nov 26 07:37:10 np0005536586 systemd[1]: libpod-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope: Deactivated successfully.
Nov 26 07:37:10 np0005536586 podman[75520]: 2025-11-26 12:37:10.079256078 +0000 UTC m=+0.402776605 container died 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:37:10 np0005536586 systemd[1]: var-lib-containers-storage-overlay-314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b-merged.mount: Deactivated successfully.
Nov 26 07:37:10 np0005536586 podman[75520]: 2025-11-26 12:37:10.101148441 +0000 UTC m=+0.424668968 container remove 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:37:10 np0005536586 systemd[1]: libpod-conmon-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope: Deactivated successfully.
Nov 26 07:37:11 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.137+0000 7f954fa56140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'selftest'
Nov 26 07:37:11 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.349+0000 7f954fa56140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'snap_schedule'
Nov 26 07:37:11 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.566+0000 7f954fa56140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'stats'
Nov 26 07:37:11 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'status'
Nov 26 07:37:12 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.008+0000 7f954fa56140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'telegraf'
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.143096347 +0000 UTC m=+0.025741777 container create 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:12 np0005536586 systemd[1]: Started libpod-conmon-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope.
Nov 26 07:37:12 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.197103646 +0000 UTC m=+0.079749076 container init 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.201391506 +0000 UTC m=+0.084036937 container start 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.202463256 +0000 UTC m=+0.085108686 container attach 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:37:12 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.214+0000 7f954fa56140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'telemetry'
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.132386075 +0000 UTC m=+0.015031525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:12 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517756581' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]: 
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]: {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "health": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "status": "HEALTH_OK",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "checks": {},
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "mutes": []
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "election_epoch": 5,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "quorum": [
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        0
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    ],
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "quorum_names": [
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "compute-0"
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    ],
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "quorum_age": 16,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "monmap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "epoch": 1,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "min_mon_release_name": "reef",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_mons": 1
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "osdmap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "epoch": 1,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_osds": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_up_osds": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "osd_up_since": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_in_osds": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "osd_in_since": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_remapped_pgs": 0
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "pgmap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "pgs_by_state": [],
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_pgs": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_pools": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_objects": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "data_bytes": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "bytes_used": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "bytes_avail": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "bytes_total": 0
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "fsmap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "epoch": 1,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "by_rank": [],
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "up:standby": 0
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "mgrmap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "available": false,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "num_standbys": 0,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "modules": [
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:            "iostat",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:            "nfs",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:            "restful"
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        ],
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "services": {}
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "servicemap": {
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "epoch": 1,
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:        "services": {}
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    },
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]:    "progress_events": {}
Nov 26 07:37:12 np0005536586 gracious_visvesvaraya[75583]: }
Nov 26 07:37:12 np0005536586 systemd[1]: libpod-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope: Deactivated successfully.
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.524593473 +0000 UTC m=+0.407238913 container died 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:12 np0005536586 systemd[1]: var-lib-containers-storage-overlay-af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379-merged.mount: Deactivated successfully.
Nov 26 07:37:12 np0005536586 podman[75569]: 2025-11-26 12:37:12.549547706 +0000 UTC m=+0.432193135 container remove 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:12 np0005536586 systemd[1]: libpod-conmon-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope: Deactivated successfully.
Nov 26 07:37:12 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.730+0000 7f954fa56140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 07:37:12 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 07:37:13 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:13.305+0000 7f954fa56140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:13 np0005536586 ceph-mgr[75236]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:13 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'volumes'
Nov 26 07:37:13 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:13.919+0000 7f954fa56140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 07:37:13 np0005536586 ceph-mgr[75236]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 07:37:13 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'zabbix'
Nov 26 07:37:14 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:14.129+0000 7f954fa56140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: ms_deliver_dispatch: unhandled message 0x563413f2e420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.whkbdn
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr handle_mgr_map Activating!
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr handle_mgr_map I am now activating
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.whkbdn(active, starting, since 0.00515095s)
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: balancer
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: crash
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer INFO root] Starting
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Manager daemon compute-0.whkbdn is now available
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: devicehealth
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:37:14
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [balancer INFO root] No pools available
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] Starting
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: iostat
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: nfs
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: orchestrator
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: pg_autoscaler
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: progress
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [progress INFO root] Loading...
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [progress INFO root] No stored events to load
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [progress INFO root] Loaded [] historic events
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] recovery thread starting
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] starting setup
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: rbd_support
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: restful
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: status
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [restful WARNING root] server not running: no certificate configured
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: telemetry
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] PerfHandler: starting
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TaskHandler: starting
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] setup complete
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: volumes
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: Activating manager daemon compute-0.whkbdn
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: Manager daemon compute-0.whkbdn is now available
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.590136721 +0000 UTC m=+0.025052570 container create 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:37:14 np0005536586 systemd[1]: Started libpod-conmon-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope.
Nov 26 07:37:14 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.638915968 +0000 UTC m=+0.073831847 container init 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.643495157 +0000 UTC m=+0.078411016 container start 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.644604739 +0000 UTC m=+0.079520596 container attach 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.580033132 +0000 UTC m=+0.014949000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695035198' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:14 np0005536586 priceless_payne[75711]: 
Nov 26 07:37:14 np0005536586 priceless_payne[75711]: {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "health": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "status": "HEALTH_OK",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "checks": {},
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "mutes": []
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "election_epoch": 5,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "quorum": [
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        0
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    ],
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "quorum_names": [
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "compute-0"
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    ],
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "quorum_age": 19,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "monmap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "epoch": 1,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "min_mon_release_name": "reef",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_mons": 1
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "osdmap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "epoch": 1,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_osds": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_up_osds": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "osd_up_since": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_in_osds": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "osd_in_since": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_remapped_pgs": 0
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "pgmap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "pgs_by_state": [],
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_pgs": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_pools": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_objects": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "data_bytes": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "bytes_used": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "bytes_avail": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "bytes_total": 0
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "fsmap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "epoch": 1,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "by_rank": [],
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "up:standby": 0
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "mgrmap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "available": false,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "num_standbys": 0,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "modules": [
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:            "iostat",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:            "nfs",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:            "restful"
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        ],
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "services": {}
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "servicemap": {
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "epoch": 1,
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:        "services": {}
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    },
Nov 26 07:37:14 np0005536586 priceless_payne[75711]:    "progress_events": {}
Nov 26 07:37:14 np0005536586 priceless_payne[75711]: }
Nov 26 07:37:14 np0005536586 systemd[1]: libpod-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope: Deactivated successfully.
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.96401459 +0000 UTC m=+0.398930448 container died 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:14 np0005536586 systemd[1]: var-lib-containers-storage-overlay-46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7-merged.mount: Deactivated successfully.
Nov 26 07:37:14 np0005536586 podman[75697]: 2025-11-26 12:37:14.9863829 +0000 UTC m=+0.421298759 container remove 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 07:37:14 np0005536586 systemd[1]: libpod-conmon-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope: Deactivated successfully.
Nov 26 07:37:15 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.whkbdn(active, since 1.00955s)
Nov 26 07:37:16 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:16 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.whkbdn(active, since 2s)
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.027252003 +0000 UTC m=+0.025162456 container create f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:37:17 np0005536586 systemd[1]: Started libpod-conmon-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope.
Nov 26 07:37:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.07970321 +0000 UTC m=+0.077613673 container init f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.08329641 +0000 UTC m=+0.081206863 container start f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.084374101 +0000 UTC m=+0.082284554 container attach f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.017280984 +0000 UTC m=+0.015191457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 07:37:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511061859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 07:37:17 np0005536586 competent_payne[75760]: 
Nov 26 07:37:17 np0005536586 competent_payne[75760]: {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "health": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "status": "HEALTH_OK",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "checks": {},
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "mutes": []
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "election_epoch": 5,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "quorum": [
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        0
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    ],
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "quorum_names": [
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "compute-0"
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    ],
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "quorum_age": 21,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "monmap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "epoch": 1,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "min_mon_release_name": "reef",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_mons": 1
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "osdmap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "epoch": 1,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_osds": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_up_osds": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "osd_up_since": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_in_osds": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "osd_in_since": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_remapped_pgs": 0
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "pgmap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "pgs_by_state": [],
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_pgs": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_pools": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_objects": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "data_bytes": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "bytes_used": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "bytes_avail": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "bytes_total": 0
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "fsmap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "epoch": 1,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "by_rank": [],
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "up:standby": 0
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "mgrmap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "available": true,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "num_standbys": 0,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "modules": [
Nov 26 07:37:17 np0005536586 competent_payne[75760]:            "iostat",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:            "nfs",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:            "restful"
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        ],
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "services": {}
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "servicemap": {
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "epoch": 1,
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 07:37:17 np0005536586 competent_payne[75760]:        "services": {}
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    },
Nov 26 07:37:17 np0005536586 competent_payne[75760]:    "progress_events": {}
Nov 26 07:37:17 np0005536586 competent_payne[75760]: }
Nov 26 07:37:17 np0005536586 systemd[1]: libpod-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope: Deactivated successfully.
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.5700463 +0000 UTC m=+0.567956753 container died f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:37:17 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a-merged.mount: Deactivated successfully.
Nov 26 07:37:17 np0005536586 podman[75747]: 2025-11-26 12:37:17.593298096 +0000 UTC m=+0.591208549 container remove f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:37:17 np0005536586 systemd[1]: libpod-conmon-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope: Deactivated successfully.
Nov 26 07:37:17 np0005536586 podman[75795]: 2025-11-26 12:37:17.633433755 +0000 UTC m=+0.025947576 container create 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:17 np0005536586 systemd[1]: Started libpod-conmon-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope.
Nov 26 07:37:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:17 np0005536586 podman[75795]: 2025-11-26 12:37:17.680147681 +0000 UTC m=+0.072661512 container init 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:17 np0005536586 podman[75795]: 2025-11-26 12:37:17.684529527 +0000 UTC m=+0.077043348 container start 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:37:17 np0005536586 podman[75795]: 2025-11-26 12:37:17.685636444 +0000 UTC m=+0.078150275 container attach 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:17 np0005536586 podman[75795]: 2025-11-26 12:37:17.623407542 +0000 UTC m=+0.015921383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 07:37:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1377714412' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:37:18 np0005536586 systemd[1]: libpod-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope: Deactivated successfully.
Nov 26 07:37:18 np0005536586 conmon[75810]: conmon 698413d143694e7fd4d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope/container/memory.events
Nov 26 07:37:18 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:18 np0005536586 podman[75836]: 2025-11-26 12:37:18.138129762 +0000 UTC m=+0.017065558 container died 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:37:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260-merged.mount: Deactivated successfully.
Nov 26 07:37:18 np0005536586 podman[75836]: 2025-11-26 12:37:18.1585231 +0000 UTC m=+0.037458896 container remove 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:37:18 np0005536586 systemd[1]: libpod-conmon-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope: Deactivated successfully.
Nov 26 07:37:18 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1377714412' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:37:18 np0005536586 podman[75848]: 2025-11-26 12:37:18.203409392 +0000 UTC m=+0.028662531 container create e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:18 np0005536586 systemd[1]: Started libpod-conmon-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope.
Nov 26 07:37:18 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:18 np0005536586 podman[75848]: 2025-11-26 12:37:18.2553942 +0000 UTC m=+0.080647350 container init e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:37:18 np0005536586 podman[75848]: 2025-11-26 12:37:18.259028929 +0000 UTC m=+0.084282069 container start e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:37:18 np0005536586 podman[75848]: 2025-11-26 12:37:18.260010388 +0000 UTC m=+0.085263528 container attach e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:37:18 np0005536586 podman[75848]: 2025-11-26 12:37:18.191487367 +0000 UTC m=+0.016740527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 26 07:37:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 07:37:19 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 07:37:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 26 07:37:19 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.whkbdn(active, since 5s)
Nov 26 07:37:19 np0005536586 systemd[1]: libpod-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope: Deactivated successfully.
Nov 26 07:37:19 np0005536586 conmon[75861]: conmon e28d76525d5f01d9f39c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope/container/memory.events
Nov 26 07:37:19 np0005536586 podman[75887]: 2025-11-26 12:37:19.240655341 +0000 UTC m=+0.014503600 container died e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:37:19 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3-merged.mount: Deactivated successfully.
Nov 26 07:37:19 np0005536586 podman[75887]: 2025-11-26 12:37:19.261159679 +0000 UTC m=+0.035007919 container remove e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:19 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: ignoring --setuser ceph since I am not root
Nov 26 07:37:19 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: ignoring --setgroup ceph since I am not root
Nov 26 07:37:19 np0005536586 systemd[1]: libpod-conmon-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope: Deactivated successfully.
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: pidfile_write: ignore empty --pid-file
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.303911028 +0000 UTC m=+0.026031523 container create 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:19 np0005536586 systemd[1]: Started libpod-conmon-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope.
Nov 26 07:37:19 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'alerts'
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.365967454 +0000 UTC m=+0.088087939 container init 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.370991101 +0000 UTC m=+0.093111586 container start 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.373778934 +0000 UTC m=+0.095899419 container attach 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.293557118 +0000 UTC m=+0.015677622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:19 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:19.629+0000 7f3615d7b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'balancer'
Nov 26 07:37:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 07:37:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634867847' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]: {
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]:    "epoch": 5,
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]:    "available": true,
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]:    "active_name": "compute-0.whkbdn",
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]:    "num_standby": 0
Nov 26 07:37:19 np0005536586 eloquent_curie[75937]: }
Nov 26 07:37:19 np0005536586 systemd[1]: libpod-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope: Deactivated successfully.
Nov 26 07:37:19 np0005536586 conmon[75937]: conmon 8b91ed3daf8912828f36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope/container/memory.events
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.842017562 +0000 UTC m=+0.564138048 container died 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:37:19 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:19.854+0000 7f3615d7b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:37:19 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'cephadm'
Nov 26 07:37:19 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c-merged.mount: Deactivated successfully.
Nov 26 07:37:19 np0005536586 podman[75907]: 2025-11-26 12:37:19.870506405 +0000 UTC m=+0.592626891 container remove 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:37:19 np0005536586 systemd[1]: libpod-conmon-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope: Deactivated successfully.
Nov 26 07:37:19 np0005536586 podman[75972]: 2025-11-26 12:37:19.910179584 +0000 UTC m=+0.027050855 container create 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:37:19 np0005536586 systemd[1]: Started libpod-conmon-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope.
Nov 26 07:37:19 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:19 np0005536586 podman[75972]: 2025-11-26 12:37:19.95549284 +0000 UTC m=+0.072364111 container init 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:19 np0005536586 podman[75972]: 2025-11-26 12:37:19.959829392 +0000 UTC m=+0.076700652 container start 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:19 np0005536586 podman[75972]: 2025-11-26 12:37:19.960928032 +0000 UTC m=+0.077799293 container attach 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:19 np0005536586 podman[75972]: 2025-11-26 12:37:19.900297161 +0000 UTC m=+0.017168432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:20 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 07:37:21 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'crash'
Nov 26 07:37:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:21.717+0000 7f3615d7b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:37:21 np0005536586 ceph-mgr[75236]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:37:21 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'dashboard'
Nov 26 07:37:22 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'devicehealth'
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.133+0000 7f3615d7b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]:  from numpy import show_config as show_numpy_config
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.586+0000 7f3615d7b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'influx'
Nov 26 07:37:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.793+0000 7f3615d7b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 07:37:23 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'insights'
Nov 26 07:37:24 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'iostat'
Nov 26 07:37:24 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:24.203+0000 7f3615d7b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 07:37:24 np0005536586 ceph-mgr[75236]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 07:37:24 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'k8sevents'
Nov 26 07:37:25 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'localpool'
Nov 26 07:37:25 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 07:37:26 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'mirroring'
Nov 26 07:37:26 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'nfs'
Nov 26 07:37:27 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:27.271+0000 7f3615d7b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 07:37:27 np0005536586 ceph-mgr[75236]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 07:37:27 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'orchestrator'
Nov 26 07:37:27 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:27.844+0000 7f3615d7b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:27 np0005536586 ceph-mgr[75236]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:27 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 07:37:28 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.074+0000 7f3615d7b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'osd_support'
Nov 26 07:37:28 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.277+0000 7f3615d7b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 07:37:28 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.511+0000 7f3615d7b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'progress'
Nov 26 07:37:28 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.721+0000 7f3615d7b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 07:37:28 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'prometheus'
Nov 26 07:37:29 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:29.592+0000 7f3615d7b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 07:37:29 np0005536586 ceph-mgr[75236]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 07:37:29 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rbd_support'
Nov 26 07:37:29 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:29.852+0000 7f3615d7b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 07:37:29 np0005536586 ceph-mgr[75236]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 07:37:29 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'restful'
Nov 26 07:37:30 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rgw'
Nov 26 07:37:31 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:31.072+0000 7f3615d7b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 07:37:31 np0005536586 ceph-mgr[75236]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 07:37:31 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'rook'
Nov 26 07:37:32 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:32.844+0000 7f3615d7b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 07:37:32 np0005536586 ceph-mgr[75236]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 07:37:32 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'selftest'
Nov 26 07:37:33 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.055+0000 7f3615d7b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'snap_schedule'
Nov 26 07:37:33 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.271+0000 7f3615d7b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'stats'
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'status'
Nov 26 07:37:33 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.712+0000 7f3615d7b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'telegraf'
Nov 26 07:37:33 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.917+0000 7f3615d7b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 07:37:33 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'telemetry'
Nov 26 07:37:34 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:34.429+0000 7f3615d7b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 07:37:34 np0005536586 ceph-mgr[75236]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 07:37:34 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 07:37:35 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.002+0000 7f3615d7b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'volumes'
Nov 26 07:37:35 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.618+0000 7f3615d7b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr[py] Loading python module 'zabbix'
Nov 26 07:37:35 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.826+0000 7f3615d7b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Active manager daemon compute-0.whkbdn restarted
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.whkbdn
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: ms_deliver_dispatch: unhandled message 0x5583b00c11e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr handle_mgr_map Activating!
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr handle_mgr_map I am now activating
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.whkbdn(active, starting, since 0.00689892s)
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: balancer
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Manager daemon compute-0.whkbdn is now available
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:37:35
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] No pools available
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: cephadm
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: crash
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: devicehealth
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: iostat
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] Starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: nfs
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: orchestrator
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: pg_autoscaler
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: progress
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [progress INFO root] Loading...
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [progress INFO root] No stored events to load
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [progress INFO root] Loaded [] historic events
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] recovery thread starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] starting setup
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: rbd_support
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: restful
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: status
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: Active manager daemon compute-0.whkbdn restarted
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: Activating manager daemon compute-0.whkbdn
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: Manager daemon compute-0.whkbdn is now available
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: telemetry
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [restful WARNING root] server not running: no certificate configured
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] PerfHandler: starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TaskHandler: starting
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"} v 0) v1
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] setup complete
Nov 26 07:37:35 np0005536586 ceph-mgr[75236]: mgr load Constructed class from module: volumes
Nov 26 07:37:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019936638 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.whkbdn(active, since 1.00943s)
Nov 26 07:37:36 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 07:37:36 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 07:37:36 np0005536586 amazing_kare[75986]: {
Nov 26 07:37:36 np0005536586 amazing_kare[75986]:    "mgrmap_epoch": 7,
Nov 26 07:37:36 np0005536586 amazing_kare[75986]:    "initialized": true
Nov 26 07:37:36 np0005536586 amazing_kare[75986]: }
Nov 26 07:37:36 np0005536586 podman[75972]: 2025-11-26 12:37:36.856098349 +0000 UTC m=+16.972969611 container died 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:36 np0005536586 systemd[1]: libpod-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope: Deactivated successfully.
Nov 26 07:37:36 np0005536586 systemd[1]: var-lib-containers-storage-overlay-88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644-merged.mount: Deactivated successfully.
Nov 26 07:37:36 np0005536586 podman[75972]: 2025-11-26 12:37:36.880599808 +0000 UTC m=+16.997471070 container remove 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: Found migration_current of "None". Setting to last migration.
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:36 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:36 np0005536586 systemd[1]: libpod-conmon-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope: Deactivated successfully.
Nov 26 07:37:36 np0005536586 podman[76141]: 2025-11-26 12:37:36.922630089 +0000 UTC m=+0.026777158 container create a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:37:36 np0005536586 systemd[1]: Started libpod-conmon-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope.
Nov 26 07:37:36 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:36 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:36 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:36 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:36 np0005536586 podman[76141]: 2025-11-26 12:37:36.979537133 +0000 UTC m=+0.083684201 container init a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:37:36 np0005536586 podman[76141]: 2025-11-26 12:37:36.983826083 +0000 UTC m=+0.087973153 container start a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:37:36 np0005536586 podman[76141]: 2025-11-26 12:37:36.985044901 +0000 UTC m=+0.089191970 container attach a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:37 np0005536586 podman[76141]: 2025-11-26 12:37:36.912097061 +0000 UTC m=+0.016244130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:37:37 np0005536586 systemd[1]: libpod-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope: Deactivated successfully.
Nov 26 07:37:37 np0005536586 podman[76183]: 2025-11-26 12:37:37.469960193 +0000 UTC m=+0.018724554 container died a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:37:37 np0005536586 systemd[1]: var-lib-containers-storage-overlay-7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238-merged.mount: Deactivated successfully.
Nov 26 07:37:37 np0005536586 podman[76183]: 2025-11-26 12:37:37.48938193 +0000 UTC m=+0.038146272 container remove a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:37:37 np0005536586 systemd[1]: libpod-conmon-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope: Deactivated successfully.
Nov 26 07:37:37 np0005536586 podman[76194]: 2025-11-26 12:37:37.529250678 +0000 UTC m=+0.024299549 container create d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:37 np0005536586 systemd[1]: Started libpod-conmon-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope.
Nov 26 07:37:37 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:37 np0005536586 podman[76194]: 2025-11-26 12:37:37.588825658 +0000 UTC m=+0.083874549 container init d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:37:37 np0005536586 podman[76194]: 2025-11-26 12:37:37.592395795 +0000 UTC m=+0.087444667 container start d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:37:37 np0005536586 podman[76194]: 2025-11-26 12:37:37.596077354 +0000 UTC m=+0.091126224 container attach d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:37:37 np0005536586 podman[76194]: 2025-11-26 12:37:37.519381711 +0000 UTC m=+0.014430612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 07:37:37 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:37:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_user
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_config
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 26 07:37:38 np0005536586 determined_galois[76207]: ssh user set to ceph-admin. sudo will be used
Nov 26 07:37:38 np0005536586 systemd[1]: libpod-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope: Deactivated successfully.
Nov 26 07:37:38 np0005536586 conmon[76207]: conmon d5ef34add8e5251bd59c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope/container/memory.events
Nov 26 07:37:38 np0005536586 podman[76256]: 2025-11-26 12:37:38.057292728 +0000 UTC m=+0.015778725 container died d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:37:38 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee-merged.mount: Deactivated successfully.
Nov 26 07:37:38 np0005536586 podman[76256]: 2025-11-26 12:37:38.076466727 +0000 UTC m=+0.034952714 container remove d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 07:37:38 np0005536586 systemd[1]: libpod-conmon-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope: Deactivated successfully.
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.119907104 +0000 UTC m=+0.027294554 container create 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 26 07:37:38 np0005536586 systemd[1]: Started libpod-conmon-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope.
Nov 26 07:37:38 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.170825713 +0000 UTC m=+0.078213163 container init 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.175219052 +0000 UTC m=+0.082606501 container start 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.176322681 +0000 UTC m=+0.083710131 container attach 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.108024423 +0000 UTC m=+0.015411873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.whkbdn(active, since 2s)
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 26 07:37:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Set ssh private key
Nov 26 07:37:38 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 26 07:37:38 np0005536586 systemd[1]: libpod-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope: Deactivated successfully.
Nov 26 07:37:38 np0005536586 conmon[76281]: conmon 570e1654906cb8d6d8ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope/container/memory.events
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.613232806 +0000 UTC m=+0.520620255 container died 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:37:38 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7-merged.mount: Deactivated successfully.
Nov 26 07:37:38 np0005536586 podman[76268]: 2025-11-26 12:37:38.634091971 +0000 UTC m=+0.541479421 container remove 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:38 np0005536586 systemd[1]: libpod-conmon-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope: Deactivated successfully.
Nov 26 07:37:38 np0005536586 podman[76314]: 2025-11-26 12:37:38.672991903 +0000 UTC m=+0.026662572 container create 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:37:38 np0005536586 systemd[1]: Started libpod-conmon-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope.
Nov 26 07:37:38 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:38 np0005536586 podman[76314]: 2025-11-26 12:37:38.733609889 +0000 UTC m=+0.087280568 container init 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 07:37:38 np0005536586 podman[76314]: 2025-11-26 12:37:38.739202668 +0000 UTC m=+0.092873337 container start 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:38 np0005536586 podman[76314]: 2025-11-26 12:37:38.740375067 +0000 UTC m=+0.094045736 container attach 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:38 np0005536586 podman[76314]: 2025-11-26 12:37:38.662044554 +0000 UTC m=+0.015715243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: Set ssh ssh_user
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: Set ssh ssh_config
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: ssh user set to ceph-admin. sudo will be used
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:39 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 26 07:37:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:39 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 26 07:37:39 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 26 07:37:39 np0005536586 systemd[1]: libpod-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76355]: 2025-11-26 12:37:39.202316359 +0000 UTC m=+0.015765509 container died 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:39 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866-merged.mount: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76355]: 2025-11-26 12:37:39.223294148 +0000 UTC m=+0.036743300 container remove 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:39 np0005536586 systemd[1]: libpod-conmon-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.264578712 +0000 UTC m=+0.024920650 container create 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:39 np0005536586 systemd[1]: Started libpod-conmon-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope.
Nov 26 07:37:39 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.317521395 +0000 UTC m=+0.077863323 container init 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.32185931 +0000 UTC m=+0.082201238 container start 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.322949153 +0000 UTC m=+0.083291091 container attach 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.254530488 +0000 UTC m=+0.014872426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:39 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:39 np0005536586 beautiful_sanderson[76379]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvzqcp66TiEKSJJXsCaM6pZVOjHowqaUwAacUNTuScNATnMclQNqJrFrKVDP5+ItZDGhKMg3QDhg15mF5ocMZkHASuETlMwuh9zgM5uBkTrVc6LQV4JpGbxJinCHqGe8PCuCaMFuwuJRLOsLe7inLXSzwXspd3jy9Udf9SYAtv83h9Rv4wzeNYpq7na5kMENHl4CegUrA4RCybyBeFdjje+D+XMFDI3INOocL3r6CpO3AWqzcq8jYiHNSSQ1KsCYNzA+9gjEpIZjPIYJ+h7yttsGh19F+AbPo9b9kckfAb2xJetlN5Kpgqdj047LKyY/fJNDKzP8/FGutWbvR3uF3/6c5UoVhhBmYzRuSX7+TFVWlwfPguFRplhlyehjUXcZEGh7Ci9SfjV+mJ4IVxh1S4wHbUGYtxYhY6bJkNZKEXs1nHHuy3z0PkcYt1FP0QIBqdGKzkXm/HUSN5E71JuSkP69bP+TA6sa0d36RvtdFh/G93Ywekdm+NQi+Wo6QR+G8= zuul@controller
Nov 26 07:37:39 np0005536586 systemd[1]: libpod-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.752060882 +0000 UTC m=+0.512402810 container died 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:39 np0005536586 systemd[1]: var-lib-containers-storage-overlay-aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c-merged.mount: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76366]: 2025-11-26 12:37:39.773925573 +0000 UTC m=+0.534267501 container remove 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:37:39 np0005536586 systemd[1]: libpod-conmon-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope: Deactivated successfully.
Nov 26 07:37:39 np0005536586 podman[76414]: 2025-11-26 12:37:39.812976629 +0000 UTC m=+0.025007024 container create 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:39 np0005536586 systemd[1]: Started libpod-conmon-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope.
Nov 26 07:37:39 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:39 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:39 np0005536586 podman[76414]: 2025-11-26 12:37:39.86009136 +0000 UTC m=+0.072121745 container init 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:37:39 np0005536586 podman[76414]: 2025-11-26 12:37:39.864594646 +0000 UTC m=+0.076625041 container start 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:37:39 np0005536586 podman[76414]: 2025-11-26 12:37:39.865592506 +0000 UTC m=+0.077622911 container attach 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:37:39 np0005536586 podman[76414]: 2025-11-26 12:37:39.802855136 +0000 UTC m=+0.014885542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:40 np0005536586 ceph-mon[74966]: Set ssh ssh_identity_key
Nov 26 07:37:40 np0005536586 ceph-mon[74966]: Set ssh private key
Nov 26 07:37:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:40 np0005536586 ceph-mon[74966]: Set ssh ssh_identity_pub
Nov 26 07:37:40 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:40 np0005536586 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 07:37:40 np0005536586 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 07:37:40 np0005536586 systemd-logind[777]: New session 20 of user ceph-admin.
Nov 26 07:37:40 np0005536586 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 07:37:40 np0005536586 systemd[1]: Starting User Manager for UID 42477...
Nov 26 07:37:40 np0005536586 systemd[76457]: Queued start job for default target Main User Target.
Nov 26 07:37:40 np0005536586 systemd[76457]: Created slice User Application Slice.
Nov 26 07:37:40 np0005536586 systemd[76457]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 07:37:40 np0005536586 systemd[76457]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 07:37:40 np0005536586 systemd[76457]: Reached target Paths.
Nov 26 07:37:40 np0005536586 systemd[76457]: Reached target Timers.
Nov 26 07:37:40 np0005536586 systemd[76457]: Starting D-Bus User Message Bus Socket...
Nov 26 07:37:40 np0005536586 systemd[76457]: Starting Create User's Volatile Files and Directories...
Nov 26 07:37:40 np0005536586 systemd[76457]: Finished Create User's Volatile Files and Directories.
Nov 26 07:37:40 np0005536586 systemd[76457]: Listening on D-Bus User Message Bus Socket.
Nov 26 07:37:40 np0005536586 systemd[76457]: Reached target Sockets.
Nov 26 07:37:40 np0005536586 systemd[76457]: Reached target Basic System.
Nov 26 07:37:40 np0005536586 systemd[76457]: Reached target Main User Target.
Nov 26 07:37:40 np0005536586 systemd[76457]: Startup finished in 93ms.
Nov 26 07:37:40 np0005536586 systemd[1]: Started User Manager for UID 42477.
Nov 26 07:37:40 np0005536586 systemd[1]: Started Session 20 of User ceph-admin.
Nov 26 07:37:40 np0005536586 systemd-logind[777]: New session 22 of user ceph-admin.
Nov 26 07:37:40 np0005536586 systemd[1]: Started Session 22 of User ceph-admin.
Nov 26 07:37:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053230 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:37:40 np0005536586 systemd-logind[777]: New session 23 of user ceph-admin.
Nov 26 07:37:40 np0005536586 systemd[1]: Started Session 23 of User ceph-admin.
Nov 26 07:37:41 np0005536586 systemd-logind[777]: New session 24 of user ceph-admin.
Nov 26 07:37:41 np0005536586 systemd[1]: Started Session 24 of User ceph-admin.
Nov 26 07:37:41 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 26 07:37:41 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 26 07:37:41 np0005536586 systemd-logind[777]: New session 25 of user ceph-admin.
Nov 26 07:37:41 np0005536586 systemd[1]: Started Session 25 of User ceph-admin.
Nov 26 07:37:41 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:41 np0005536586 systemd-logind[777]: New session 26 of user ceph-admin.
Nov 26 07:37:41 np0005536586 systemd[1]: Started Session 26 of User ceph-admin.
Nov 26 07:37:42 np0005536586 systemd-logind[777]: New session 27 of user ceph-admin.
Nov 26 07:37:42 np0005536586 systemd[1]: Started Session 27 of User ceph-admin.
Nov 26 07:37:42 np0005536586 systemd-logind[777]: New session 28 of user ceph-admin.
Nov 26 07:37:42 np0005536586 systemd[1]: Started Session 28 of User ceph-admin.
Nov 26 07:37:42 np0005536586 systemd-logind[777]: New session 29 of user ceph-admin.
Nov 26 07:37:42 np0005536586 systemd[1]: Started Session 29 of User ceph-admin.
Nov 26 07:37:43 np0005536586 ceph-mon[74966]: Deploying cephadm binary to compute-0
Nov 26 07:37:43 np0005536586 systemd-logind[777]: New session 30 of user ceph-admin.
Nov 26 07:37:43 np0005536586 systemd[1]: Started Session 30 of User ceph-admin.
Nov 26 07:37:43 np0005536586 systemd-logind[777]: New session 31 of user ceph-admin.
Nov 26 07:37:43 np0005536586 systemd[1]: Started Session 31 of User ceph-admin.
Nov 26 07:37:43 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:44 np0005536586 systemd-logind[777]: New session 32 of user ceph-admin.
Nov 26 07:37:44 np0005536586 systemd[1]: Started Session 32 of User ceph-admin.
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:44 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Added host compute-0
Nov 26 07:37:44 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 07:37:44 np0005536586 amazing_bouman[76427]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:37:44 np0005536586 systemd[1]: libpod-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[76414]: 2025-11-26 12:37:44.343021603 +0000 UTC m=+4.555051988 container died 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 07:37:44 np0005536586 systemd[1]: var-lib-containers-storage-overlay-87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816-merged.mount: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[76414]: 2025-11-26 12:37:44.37567303 +0000 UTC m=+4.587703415 container remove 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:37:44 np0005536586 systemd[1]: libpod-conmon-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.422416362 +0000 UTC m=+0.029061072 container create c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:37:44 np0005536586 systemd[1]: Started libpod-conmon-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope.
Nov 26 07:37:44 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.473028522 +0000 UTC m=+0.079673254 container init c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.478877845 +0000 UTC m=+0.085522556 container start c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.480753959 +0000 UTC m=+0.087398680 container attach c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.411553832 +0000 UTC m=+0.018198563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:44 np0005536586 podman[77206]: 2025-11-26 12:37:44.706726873 +0000 UTC m=+0.028399315 container create fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:44 np0005536586 systemd[1]: Started libpod-conmon-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope.
Nov 26 07:37:44 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:44 np0005536586 podman[77206]: 2025-11-26 12:37:44.757215212 +0000 UTC m=+0.078887664 container init fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:37:44 np0005536586 podman[77206]: 2025-11-26 12:37:44.761530834 +0000 UTC m=+0.083203266 container start fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:37:44 np0005536586 podman[77206]: 2025-11-26 12:37:44.762954366 +0000 UTC m=+0.084626798 container attach fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 07:37:44 np0005536586 podman[77206]: 2025-11-26 12:37:44.695029101 +0000 UTC m=+0.016701553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:44 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:44 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 26 07:37:44 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 07:37:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:44 np0005536586 gifted_lamport[77134]: Scheduled mon update...
Nov 26 07:37:44 np0005536586 systemd[1]: libpod-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.92931558 +0000 UTC m=+0.535960311 container died c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:37:44 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a-merged.mount: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[77092]: 2025-11-26 12:37:44.952510728 +0000 UTC m=+0.559155428 container remove c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 07:37:44 np0005536586 systemd[1]: libpod-conmon-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope: Deactivated successfully.
Nov 26 07:37:44 np0005536586 podman[77255]: 2025-11-26 12:37:44.997281341 +0000 UTC m=+0.031545664 container create 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:45 np0005536586 keen_jones[77229]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 07:37:45 np0005536586 podman[77206]: 2025-11-26 12:37:45.019137025 +0000 UTC m=+0.340809467 container died fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:45 np0005536586 systemd[1]: Started libpod-conmon-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope.
Nov 26 07:37:45 np0005536586 systemd[1]: libpod-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope: Deactivated successfully.
Nov 26 07:37:45 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:45.056338266 +0000 UTC m=+0.090602599 container init 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:45.060715624 +0000 UTC m=+0.094979946 container start 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:45 np0005536586 podman[77206]: 2025-11-26 12:37:45.061652699 +0000 UTC m=+0.383325132 container remove fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:45.065206416 +0000 UTC m=+0.099470759 container attach 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:37:45 np0005536586 systemd[1]: libpod-conmon-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope: Deactivated successfully.
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:44.983199096 +0000 UTC m=+0.017463419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: Added host compute-0
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: Saving service mon spec with placement count:5
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5358e874143d7d19fd0dd7cbfce8f0afdccf51b15b61c1bdcefe3061eba472cf-merged.mount: Deactivated successfully.
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:45 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 26 07:37:45 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:45 np0005536586 inspiring_albattani[77269]: Scheduled mgr update...
Nov 26 07:37:45 np0005536586 systemd[1]: libpod-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope: Deactivated successfully.
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:45.519892697 +0000 UTC m=+0.554157021 container died 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:45 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359-merged.mount: Deactivated successfully.
Nov 26 07:37:45 np0005536586 podman[77255]: 2025-11-26 12:37:45.546586698 +0000 UTC m=+0.580851021 container remove 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:37:45 np0005536586 systemd[1]: libpod-conmon-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope: Deactivated successfully.
Nov 26 07:37:45 np0005536586 podman[77486]: 2025-11-26 12:37:45.592001696 +0000 UTC m=+0.029572616 container create 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:37:45 np0005536586 systemd[1]: Started libpod-conmon-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope.
Nov 26 07:37:45 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:45 np0005536586 podman[77486]: 2025-11-26 12:37:45.638361324 +0000 UTC m=+0.075932254 container init 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:37:45 np0005536586 podman[77486]: 2025-11-26 12:37:45.643242301 +0000 UTC m=+0.080813221 container start 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:45 np0005536586 podman[77486]: 2025-11-26 12:37:45.64458936 +0000 UTC m=+0.082160301 container attach 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:37:45 np0005536586 podman[77486]: 2025-11-26 12:37:45.580023715 +0000 UTC m=+0.017594655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:45 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:37:45 np0005536586 podman[77624]: 2025-11-26 12:37:45.979829482 +0000 UTC m=+0.036720647 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:37:46 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:46 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service crash spec with placement *
Nov 26 07:37:46 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 nostalgic_zhukovsky[77525]: Scheduled crash update...
Nov 26 07:37:46 np0005536586 systemd[1]: libpod-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope: Deactivated successfully.
Nov 26 07:37:46 np0005536586 conmon[77525]: conmon 33bdf644eb8edf00cf62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope/container/memory.events
Nov 26 07:37:46 np0005536586 podman[77486]: 2025-11-26 12:37:46.101523626 +0000 UTC m=+0.539094547 container died 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8-merged.mount: Deactivated successfully.
Nov 26 07:37:46 np0005536586 podman[77486]: 2025-11-26 12:37:46.125214298 +0000 UTC m=+0.562785219 container remove 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:37:46 np0005536586 systemd[1]: libpod-conmon-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope: Deactivated successfully.
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.168613197 +0000 UTC m=+0.028424062 container create 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:46 np0005536586 systemd[1]: Started libpod-conmon-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope.
Nov 26 07:37:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.221649697 +0000 UTC m=+0.081460572 container init 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.22741454 +0000 UTC m=+0.087225404 container start 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.228875643 +0000 UTC m=+0.088686508 container attach 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.157423191 +0000 UTC m=+0.017234076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:46 np0005536586 podman[77671]: 2025-11-26 12:37:46.293856249 +0000 UTC m=+0.048739924 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:46 np0005536586 podman[77624]: 2025-11-26 12:37:46.297177358 +0000 UTC m=+0.354068513 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: Saving service mgr spec with placement count:2
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: Saving service crash spec with placement *
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 26 07:37:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1109739817' entity='client.admin' 
Nov 26 07:37:46 np0005536586 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77827 (sysctl)
Nov 26 07:37:46 np0005536586 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 26 07:37:46 np0005536586 systemd[1]: libpod-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope: Deactivated successfully.
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.68731712 +0000 UTC m=+0.547127995 container died 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:37:46 np0005536586 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 26 07:37:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19-merged.mount: Deactivated successfully.
Nov 26 07:37:46 np0005536586 podman[77653]: 2025-11-26 12:37:46.721441382 +0000 UTC m=+0.581252237 container remove 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:37:46 np0005536586 systemd[1]: libpod-conmon-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope: Deactivated successfully.
Nov 26 07:37:46 np0005536586 podman[77843]: 2025-11-26 12:37:46.763095534 +0000 UTC m=+0.026280884 container create 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:46 np0005536586 systemd[1]: Started libpod-conmon-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope.
Nov 26 07:37:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:46 np0005536586 podman[77843]: 2025-11-26 12:37:46.824708304 +0000 UTC m=+0.087893674 container init 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:46 np0005536586 podman[77843]: 2025-11-26 12:37:46.829307962 +0000 UTC m=+0.092493311 container start 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:46 np0005536586 podman[77843]: 2025-11-26 12:37:46.830519525 +0000 UTC m=+0.093704874 container attach 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:46 np0005536586 podman[77843]: 2025-11-26 12:37:46.752409086 +0000 UTC m=+0.015594457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:47 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[77843]: 2025-11-26 12:37:47.28607892 +0000 UTC m=+0.549264270 container died 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e-merged.mount: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[77843]: 2025-11-26 12:37:47.311395455 +0000 UTC m=+0.574580805 container remove 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-conmon-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[78058]: 2025-11-26 12:37:47.361639553 +0000 UTC m=+0.031334297 container create 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:47 np0005536586 systemd[1]: Started libpod-conmon-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope.
Nov 26 07:37:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:47 np0005536586 podman[78058]: 2025-11-26 12:37:47.418838596 +0000 UTC m=+0.088533350 container init 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:47 np0005536586 podman[78058]: 2025-11-26 12:37:47.424698509 +0000 UTC m=+0.094393242 container start 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 07:37:47 np0005536586 podman[78058]: 2025-11-26 12:37:47.426051187 +0000 UTC m=+0.095745941 container attach 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:37:47 np0005536586 podman[78058]: 2025-11-26 12:37:47.347540205 +0000 UTC m=+0.017234958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1109739817' entity='client.admin' 
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:47 np0005536586 podman[78174]: 2025-11-26 12:37:47.676459466 +0000 UTC m=+0.028030341 container create 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:47 np0005536586 systemd[1]: Started libpod-conmon-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope.
Nov 26 07:37:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:47 np0005536586 podman[78174]: 2025-11-26 12:37:47.729846556 +0000 UTC m=+0.081417441 container init 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:47 np0005536586 podman[78174]: 2025-11-26 12:37:47.734331898 +0000 UTC m=+0.085902772 container start 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:37:47 np0005536586 podman[78174]: 2025-11-26 12:37:47.735471606 +0000 UTC m=+0.087042480 container attach 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:37:47 np0005536586 ecstatic_varahamihira[78206]: 167 167
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 conmon[78206]: conmon 51a6b1c9f0661a1eeca3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope/container/memory.events
Nov 26 07:37:47 np0005536586 podman[78174]: 2025-11-26 12:37:47.663370802 +0000 UTC m=+0.014941696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:47 np0005536586 podman[78211]: 2025-11-26 12:37:47.769421911 +0000 UTC m=+0.018028543 container died 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:37:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c2e0706f8b4486625f92b60e4b452e6eaa53d1e01f65b4538d3a5c28d1fbde99-merged.mount: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[78211]: 2025-11-26 12:37:47.787651852 +0000 UTC m=+0.036258485 container remove 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-conmon-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:47 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:47 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:47 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Added label _admin to host compute-0
Nov 26 07:37:47 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 26 07:37:47 np0005536586 nifty_gagarin[78111]: Added label _admin to host compute-0
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 conmon[78111]: conmon 3a6406738e894a68fd3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope/container/memory.events
Nov 26 07:37:47 np0005536586 podman[78224]: 2025-11-26 12:37:47.907001589 +0000 UTC m=+0.016387071 container died 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50-merged.mount: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[78224]: 2025-11-26 12:37:47.927407421 +0000 UTC m=+0.036792882 container remove 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:47 np0005536586 systemd[1]: libpod-conmon-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope: Deactivated successfully.
Nov 26 07:37:47 np0005536586 podman[78236]: 2025-11-26 12:37:47.971207997 +0000 UTC m=+0.026399116 container create 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 07:37:47 np0005536586 systemd[1]: Started libpod-conmon-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope.
Nov 26 07:37:48 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 podman[78236]: 2025-11-26 12:37:48.024343002 +0000 UTC m=+0.079534141 container init 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:37:48 np0005536586 podman[78236]: 2025-11-26 12:37:48.028243612 +0000 UTC m=+0.083434731 container start 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:48 np0005536586 podman[78236]: 2025-11-26 12:37:48.02959022 +0000 UTC m=+0.084781339 container attach 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:37:48 np0005536586 podman[78236]: 2025-11-26 12:37:47.960615487 +0000 UTC m=+0.015806626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 26 07:37:48 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/124655842' entity='client.admin' 
Nov 26 07:37:48 np0005536586 systemd[1]: libpod-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope: Deactivated successfully.
Nov 26 07:37:48 np0005536586 podman[78275]: 2025-11-26 12:37:48.484194075 +0000 UTC m=+0.014703176 container died 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:37:48 np0005536586 systemd[1]: var-lib-containers-storage-overlay-18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8-merged.mount: Deactivated successfully.
Nov 26 07:37:48 np0005536586 podman[78275]: 2025-11-26 12:37:48.502485813 +0000 UTC m=+0.032994914 container remove 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:37:48 np0005536586 systemd[1]: libpod-conmon-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope: Deactivated successfully.
Nov 26 07:37:48 np0005536586 podman[78287]: 2025-11-26 12:37:48.541447611 +0000 UTC m=+0.024064356 container create 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:37:48 np0005536586 systemd[1]: Started libpod-conmon-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope.
Nov 26 07:37:48 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:48 np0005536586 podman[78287]: 2025-11-26 12:37:48.591186516 +0000 UTC m=+0.073803252 container init 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:48 np0005536586 podman[78287]: 2025-11-26 12:37:48.595637394 +0000 UTC m=+0.078254128 container start 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:37:48 np0005536586 podman[78287]: 2025-11-26 12:37:48.596722828 +0000 UTC m=+0.079339564 container attach 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:37:48 np0005536586 podman[78287]: 2025-11-26 12:37:48.53141169 +0000 UTC m=+0.014028445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:48 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:48 np0005536586 ceph-mon[74966]: Added label _admin to host compute-0
Nov 26 07:37:48 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/124655842' entity='client.admin' 
Nov 26 07:37:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 26 07:37:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3505617280' entity='client.admin' 
Nov 26 07:37:49 np0005536586 agitated_proskuriakova[78301]: set mgr/dashboard/cluster/status
Nov 26 07:37:49 np0005536586 systemd[1]: libpod-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope: Deactivated successfully.
Nov 26 07:37:49 np0005536586 podman[78287]: 2025-11-26 12:37:49.099656855 +0000 UTC m=+0.582273590 container died 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:37:49 np0005536586 systemd[1]: var-lib-containers-storage-overlay-66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e-merged.mount: Deactivated successfully.
Nov 26 07:37:49 np0005536586 podman[78287]: 2025-11-26 12:37:49.118848369 +0000 UTC m=+0.601465114 container remove 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:49 np0005536586 systemd[1]: libpod-conmon-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope: Deactivated successfully.
Nov 26 07:37:49 np0005536586 podman[78344]: 2025-11-26 12:37:49.250483587 +0000 UTC m=+0.024502703 container create b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:49 np0005536586 systemd[1]: Started libpod-conmon-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope.
Nov 26 07:37:49 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 podman[78344]: 2025-11-26 12:37:49.301706519 +0000 UTC m=+0.075725624 container init b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:49 np0005536586 podman[78344]: 2025-11-26 12:37:49.306832338 +0000 UTC m=+0.080851444 container start b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:49 np0005536586 podman[78344]: 2025-11-26 12:37:49.307912804 +0000 UTC m=+0.081931910 container attach b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:49 np0005536586 podman[78344]: 2025-11-26 12:37:49.24059314 +0000 UTC m=+0.014612266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:49 np0005536586 python3[78387]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:49 np0005536586 podman[78388]: 2025-11-26 12:37:49.525367376 +0000 UTC m=+0.027443383 container create 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 26 07:37:49 np0005536586 systemd[1]: Started libpod-conmon-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope.
Nov 26 07:37:49 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:49 np0005536586 podman[78388]: 2025-11-26 12:37:49.582194789 +0000 UTC m=+0.084270797 container init 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:37:49 np0005536586 podman[78388]: 2025-11-26 12:37:49.587158562 +0000 UTC m=+0.089234570 container start 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:37:49 np0005536586 podman[78388]: 2025-11-26 12:37:49.588422865 +0000 UTC m=+0.090498873 container attach 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:49 np0005536586 podman[78388]: 2025-11-26 12:37:49.514152323 +0000 UTC m=+0.016228351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:49 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1980961670' entity='client.admin' 
Nov 26 07:37:50 np0005536586 systemd[1]: libpod-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope: Deactivated successfully.
Nov 26 07:37:50 np0005536586 podman[78438]: 2025-11-26 12:37:50.057951814 +0000 UTC m=+0.018320144 container died 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:37:50 np0005536586 systemd[1]: var-lib-containers-storage-overlay-be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e-merged.mount: Deactivated successfully.
Nov 26 07:37:50 np0005536586 podman[78438]: 2025-11-26 12:37:50.083286655 +0000 UTC m=+0.043654983 container remove 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3505617280' entity='client.admin' 
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1980961670' entity='client.admin' 
Nov 26 07:37:50 np0005536586 systemd[1]: libpod-conmon-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope: Deactivated successfully.
Nov 26 07:37:50 np0005536586 strange_tu[78357]: [
Nov 26 07:37:50 np0005536586 strange_tu[78357]:    {
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "available": false,
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "ceph_device": false,
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "lsm_data": {},
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "lvs": [],
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "path": "/dev/sr0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "rejected_reasons": [
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "Has a FileSystem",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "Insufficient space (<5GB)"
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        ],
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        "sys_api": {
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "actuators": null,
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "device_nodes": "sr0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "devname": "sr0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "human_readable_size": "474.00 KB",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "id_bus": "ata",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "model": "QEMU DVD-ROM",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "nr_requests": "64",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "parent": "/dev/sr0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "partitions": {},
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "path": "/dev/sr0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "removable": "1",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "rev": "2.5+",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "ro": "0",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "rotational": "1",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "sas_address": "",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "sas_device_handle": "",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "scheduler_mode": "mq-deadline",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "sectors": 0,
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "sectorsize": "2048",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "size": 485376.0,
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "support_discard": "2048",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "type": "disk",
Nov 26 07:37:50 np0005536586 strange_tu[78357]:            "vendor": "QEMU"
Nov 26 07:37:50 np0005536586 strange_tu[78357]:        }
Nov 26 07:37:50 np0005536586 strange_tu[78357]:    }
Nov 26 07:37:50 np0005536586 strange_tu[78357]: ]
Nov 26 07:37:50 np0005536586 systemd[1]: libpod-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Deactivated successfully.
Nov 26 07:37:50 np0005536586 systemd[1]: libpod-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Consumed 1.001s CPU time.
Nov 26 07:37:50 np0005536586 podman[78344]: 2025-11-26 12:37:50.328920877 +0000 UTC m=+1.102939982 container died b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:37:50 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21-merged.mount: Deactivated successfully.
Nov 26 07:37:50 np0005536586 podman[78344]: 2025-11-26 12:37:50.361186696 +0000 UTC m=+1.135205803 container remove b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:50 np0005536586 systemd[1]: libpod-conmon-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Deactivated successfully.
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:50 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 26 07:37:50 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 26 07:37:50 np0005536586 ansible-async_wrapper.py[80319]: Invoked with j939461483014 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160670.3639514-36930-108437883879878/AnsiballZ_command.py _
Nov 26 07:37:50 np0005536586 ansible-async_wrapper.py[80400]: Starting module and watcher
Nov 26 07:37:50 np0005536586 ansible-async_wrapper.py[80400]: Start watching 80405 (30)
Nov 26 07:37:50 np0005536586 ansible-async_wrapper.py[80405]: Start module (80405)
Nov 26 07:37:50 np0005536586 ansible-async_wrapper.py[80319]: Return async_wrapper task started.
Nov 26 07:37:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:37:50 np0005536586 python3[80415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:50 np0005536586 podman[80489]: 2025-11-26 12:37:50.992724238 +0000 UTC m=+0.029336692 container create 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:51 np0005536586 systemd[1]: Started libpod-conmon-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope.
Nov 26 07:37:51 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:51.053861683 +0000 UTC m=+0.090474167 container init 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:51.060501574 +0000 UTC m=+0.097114038 container start 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:51.061810922 +0000 UTC m=+0.098423396 container attach 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:50.982016912 +0000 UTC m=+0.018629405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: Updating compute-0:/etc/ceph/ceph.conf
Nov 26 07:37:51 np0005536586 ceph-mon[74966]: Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:37:51 np0005536586 sweet_franklin[80530]: 
Nov 26 07:37:51 np0005536586 sweet_franklin[80530]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 07:37:51 np0005536586 systemd[1]: libpod-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope: Deactivated successfully.
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:51.510614568 +0000 UTC m=+0.547227032 container died 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:37:51 np0005536586 systemd[1]: var-lib-containers-storage-overlay-660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95-merged.mount: Deactivated successfully.
Nov 26 07:37:51 np0005536586 podman[80489]: 2025-11-26 12:37:51.536209587 +0000 UTC m=+0.572822061 container remove 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:51 np0005536586 systemd[1]: libpod-conmon-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope: Deactivated successfully.
Nov 26 07:37:51 np0005536586 ansible-async_wrapper.py[80405]: Module complete (80405)
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 07:37:51 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:52 np0005536586 python3[81213]: ansible-ansible.legacy.async_status Invoked with jid=j939461483014.80319 mode=status _async_dir=/root/.ansible_async
Nov 26 07:37:52 np0005536586 python3[81407]: ansible-ansible.legacy.async_status Invoked with jid=j939461483014.80319 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 07:37:52 np0005536586 ceph-mon[74966]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 07:37:52 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 07:37:52 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 07:37:52 np0005536586 python3[81658]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 07:37:53 np0005536586 python3[81911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.111231429 +0000 UTC m=+0.031232143 container create 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:37:53 np0005536586 systemd[1]: Started libpod-conmon-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope.
Nov 26 07:37:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.164471593 +0000 UTC m=+0.084472316 container init 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.168699449 +0000 UTC m=+0.088700153 container start 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.170279898 +0000 UTC m=+0.090280602 container attach 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.097891231 +0000 UTC m=+0.017891966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:53 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1))
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:53 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 26 07:37:53 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.594316002 +0000 UTC m=+0.027806538 container create 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:37:53 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:37:53 np0005536586 friendly_chandrasekhar[82024]: 
Nov 26 07:37:53 np0005536586 friendly_chandrasekhar[82024]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 07:37:53 np0005536586 systemd[1]: Started libpod-conmon-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope.
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.624439806 +0000 UTC m=+0.544440509 container died 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:37:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:53 np0005536586 systemd[1]: libpod-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope: Deactivated successfully.
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.63799015 +0000 UTC m=+0.071480696 container init 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:37:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay-7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995-merged.mount: Deactivated successfully.
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.646075515 +0000 UTC m=+0.079566052 container start 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:53 np0005536586 eager_newton[82221]: 167 167
Nov 26 07:37:53 np0005536586 systemd[1]: libpod-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope: Deactivated successfully.
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.647663798 +0000 UTC m=+0.081154325 container attach 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.648726941 +0000 UTC m=+0.082217497 container died 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:37:53 np0005536586 podman[81985]: 2025-11-26 12:37:53.65755151 +0000 UTC m=+0.577552214 container remove 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:53 np0005536586 systemd[1]: libpod-conmon-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope: Deactivated successfully.
Nov 26 07:37:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c7b373b2296b9038e7e62c75974c8cf544f9eec40cdcc4852a9cf5a1eed59eb5-merged.mount: Deactivated successfully.
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.675528024 +0000 UTC m=+0.109018560 container remove 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:37:53 np0005536586 podman[82206]: 2025-11-26 12:37:53.58260281 +0000 UTC m=+0.016093366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:53 np0005536586 systemd[1]: libpod-conmon-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope: Deactivated successfully.
Nov 26 07:37:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:37:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:37:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:37:53 np0005536586 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 07:37:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:37:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:37:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:37:54 np0005536586 python3[82310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.074737026 +0000 UTC m=+0.030544989 container create d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:54 np0005536586 systemd[1]: Started libpod-conmon-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope.
Nov 26 07:37:54 np0005536586 systemd[1]: Starting Ceph crash.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:37:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.152866209 +0000 UTC m=+0.108674162 container init d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.158384809 +0000 UTC m=+0.114192762 container start d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.062044328 +0000 UTC m=+0.017852301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.159480093 +0000 UTC m=+0.115288045 container attach d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: Deploying daemon crash.compute-0 on compute-0
Nov 26 07:37:54 np0005536586 podman[82406]: 2025-11-26 12:37:54.280904579 +0000 UTC m=+0.027233066 container create 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:54 np0005536586 podman[82406]: 2025-11-26 12:37:54.328182718 +0000 UTC m=+0.074511216 container init 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:54 np0005536586 podman[82406]: 2025-11-26 12:37:54.332001424 +0000 UTC m=+0.078329911 container start 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 07:37:54 np0005536586 bash[82406]: 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe
Nov 26 07:37:54 np0005536586 podman[82406]: 2025-11-26 12:37:54.268502108 +0000 UTC m=+0.014830616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:54 np0005536586 systemd[1]: Started Ceph crash.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1))
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1)) in 1 seconds
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 93719bf2-5da3-4d7c-8f4f-016218e16b1c does not exist
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2))
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 07:37:54 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 26 07:37:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/469833143' entity='client.admin' 
Nov 26 07:37:54 np0005536586 systemd[1]: libpod-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope: Deactivated successfully.
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.645497532 +0000 UTC m=+0.601305495 container died d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d-merged.mount: Deactivated successfully.
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.659+0000 7f48658a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.659+0000 7f48658a2640 -1 AuthRegistry(0x7f4860066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.663+0000 7f48658a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.663+0000 7f48658a2640 -1 AuthRegistry(0x7f48658a1000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.664+0000 7f485effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.664+0000 7f48658a2640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 26 07:37:54 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 26 07:37:54 np0005536586 podman[82348]: 2025-11-26 12:37:54.675639219 +0000 UTC m=+0.631447172 container remove d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 07:37:54 np0005536586 systemd[1]: libpod-conmon-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope: Deactivated successfully.
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.814867125 +0000 UTC m=+0.028819137 container create 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 26 07:37:54 np0005536586 systemd[1]: Started libpod-conmon-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope.
Nov 26 07:37:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.871344527 +0000 UTC m=+0.085296529 container init 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.876640287 +0000 UTC m=+0.090592290 container start 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.877806635 +0000 UTC m=+0.091758647 container attach 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:54 np0005536586 jovial_stonebraker[82639]: 167 167
Nov 26 07:37:54 np0005536586 systemd[1]: libpod-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope: Deactivated successfully.
Nov 26 07:37:54 np0005536586 conmon[82639]: conmon 06588c7a27403c10469b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope/container/memory.events
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.881497059 +0000 UTC m=+0.095449091 container died 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:37:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d04d48a88418ce45e2fbbe541a9d3d27eabb85063d8a7aa8c6bd5632d1806445-merged.mount: Deactivated successfully.
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.898451206 +0000 UTC m=+0.112403209 container remove 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:37:54 np0005536586 podman[82621]: 2025-11-26 12:37:54.802885257 +0000 UTC m=+0.016837269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:54 np0005536586 systemd[1]: libpod-conmon-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope: Deactivated successfully.
Nov 26 07:37:54 np0005536586 python3[82634]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:54 np0005536586 systemd[1]: Reloading.
Nov 26 07:37:54 np0005536586 podman[82658]: 2025-11-26 12:37:54.974221504 +0000 UTC m=+0.041853678 container create da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:37:54 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:37:54 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:54.955553067 +0000 UTC m=+0.023185260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:55 np0005536586 systemd[1]: Started libpod-conmon-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope.
Nov 26 07:37:55 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:55.169557426 +0000 UTC m=+0.237189610 container init da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:37:55 np0005536586 systemd[1]: Reloading.
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:55.175248701 +0000 UTC m=+0.242880875 container start da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:55.176461727 +0000 UTC m=+0.244093901 container attach da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/469833143' entity='client.admin' 
Nov 26 07:37:55 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:37:55 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:37:55 np0005536586 systemd[1]: Starting Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:37:55 np0005536586 podman[82807]: 2025-11-26 12:37:55.562883533 +0000 UTC m=+0.027665724 container create f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/lib/ceph/mgr/ceph-compute-0.aefzvx supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:55 np0005536586 podman[82807]: 2025-11-26 12:37:55.608877161 +0000 UTC m=+0.073659361 container init f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:55 np0005536586 podman[82807]: 2025-11-26 12:37:55.613223943 +0000 UTC m=+0.078006143 container start f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 07:37:55 np0005536586 bash[82807]: f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800
Nov 26 07:37:55 np0005536586 podman[82807]: 2025-11-26 12:37:55.550845058 +0000 UTC m=+0.015627268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:55 np0005536586 systemd[1]: Started Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2954090457' entity='client.admin' 
Nov 26 07:37:55 np0005536586 ceph-mgr[82825]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:37:55 np0005536586 ceph-mgr[82825]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 07:37:55 np0005536586 systemd[1]: libpod-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope: Deactivated successfully.
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:55.647866191 +0000 UTC m=+0.715498375 container died da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:37:55 np0005536586 ceph-mgr[82825]: pidfile_write: ignore empty --pid-file
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e-merged.mount: Deactivated successfully.
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2))
Nov 26 07:37:55 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2)) in 1 seconds
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 podman[82658]: 2025-11-26 12:37:55.683539211 +0000 UTC m=+0.751171385 container remove da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:37:55 np0005536586 systemd[1]: libpod-conmon-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope: Deactivated successfully.
Nov 26 07:37:55 np0005536586 ceph-mgr[82825]: mgr[py] Loading python module 'alerts'
Nov 26 07:37:55 np0005536586 ansible-async_wrapper.py[80400]: Done in kid B.
Nov 26 07:37:55 np0005536586 ceph-mgr[75236]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 26 07:37:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 07:37:55 np0005536586 ceph-mgr[75236]: [progress INFO root] Writing back 2 completed events
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:37:55 np0005536586 python3[82982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:55 np0005536586 podman[83038]: 2025-11-26 12:37:55.99409555 +0000 UTC m=+0.033057233 container create e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:37:56 np0005536586 systemd[1]: Started libpod-conmon-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope.
Nov 26 07:37:56 np0005536586 ceph-mgr[82825]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:37:56 np0005536586 ceph-mgr[82825]: mgr[py] Loading python module 'balancer'
Nov 26 07:37:56 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:56.032+0000 7f1de36d7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 07:37:56 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:56 np0005536586 podman[83038]: 2025-11-26 12:37:56.05042257 +0000 UTC m=+0.089384284 container init e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:37:56 np0005536586 podman[83038]: 2025-11-26 12:37:56.055619583 +0000 UTC m=+0.094581276 container start e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:56 np0005536586 podman[83038]: 2025-11-26 12:37:56.057207505 +0000 UTC m=+0.096169199 container attach e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:56 np0005536586 podman[83038]: 2025-11-26 12:37:55.981548546 +0000 UTC m=+0.020510260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2954090457' entity='client.admin' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mgr[82825]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:37:56 np0005536586 ceph-mgr[82825]: mgr[py] Loading python module 'cephadm'
Nov 26 07:37:56 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:56.253+0000 7f1de36d7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 07:37:56 np0005536586 podman[83111]: 2025-11-26 12:37:56.294896886 +0000 UTC m=+0.046360489 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:56 np0005536586 podman[83111]: 2025-11-26 12:37:56.368748208 +0000 UTC m=+0.120211812 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4978d9f2-cf2b-46ba-b0b7-2bbd7d587193 does not exist
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0b517728-5554-4b84-9b31-39c90cb6ded9 does not exist
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e4ec4243-8d10-4ef0-bb3a-741e9da8caf0 does not exist
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 07:37:56 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.007508881 +0000 UTC m=+0.027932125 container create cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:37:57 np0005536586 systemd[1]: Started libpod-conmon-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope.
Nov 26 07:37:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.051594995 +0000 UTC m=+0.072018239 container init cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.056772983 +0000 UTC m=+0.077196226 container start cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:57 np0005536586 blissful_jackson[83381]: 167 167
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.060175092 +0000 UTC m=+0.080598337 container attach cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:37:57 np0005536586 conmon[83381]: conmon cce2928a15384f2d569a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope/container/memory.events
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.06136781 +0000 UTC m=+0.081791054 container died cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:37:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f578a478bd6c1496ddb1e360430a80454efcee05ce98fbef0b679c8083c9dde9-merged.mount: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:57.092411888 +0000 UTC m=+0.112835132 container remove cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:37:57 np0005536586 podman[83367]: 2025-11-26 12:37:56.995845052 +0000 UTC m=+0.016268316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-conmon-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 07:37:57 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 07:37:57 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 26 07:37:57 np0005536586 elated_nobel[83052]: set require_min_compat_client to mimic
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83038]: 2025-11-26 12:37:57.228790314 +0000 UTC m=+1.267752017 container died e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:37:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78-merged.mount: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83038]: 2025-11-26 12:37:57.255744406 +0000 UTC m=+1.294706099 container remove e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-conmon-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.562350219 +0000 UTC m=+0.030146928 container create bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:37:57 np0005536586 systemd[1]: Started libpod-conmon-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope.
Nov 26 07:37:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.608066315 +0000 UTC m=+0.075863034 container init bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.614243495 +0000 UTC m=+0.082040203 container start bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.615865561 +0000 UTC m=+0.083662270 container attach bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:37:57 np0005536586 serene_banzai[83550]: 167 167
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.61717612 +0000 UTC m=+0.084972830 container died bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0f7b3c956fa6ba538c24c844caa8d8ca3eda3147aac2ed3dd6174b1d8585d929-merged.mount: Deactivated successfully.
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.637600067 +0000 UTC m=+0.105396776 container remove bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:37:57 np0005536586 podman[83533]: 2025-11-26 12:37:57.548646075 +0000 UTC m=+0.016442795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:37:57 np0005536586 systemd[1]: libpod-conmon-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope: Deactivated successfully.
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:57 np0005536586 python3[83582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:57 np0005536586 podman[83638]: 2025-11-26 12:37:57.797622946 +0000 UTC m=+0.030723353 container create 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:37:57 np0005536586 systemd[1]: Started libpod-conmon-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope.
Nov 26 07:37:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:37:57 np0005536586 podman[83638]: 2025-11-26 12:37:57.846419427 +0000 UTC m=+0.079519834 container init 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:57 np0005536586 podman[83638]: 2025-11-26 12:37:57.85252928 +0000 UTC m=+0.085629677 container start 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:37:57 np0005536586 podman[83638]: 2025-11-26 12:37:57.853836614 +0000 UTC m=+0.086937011 container attach 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:57 np0005536586 podman[83638]: 2025-11-26 12:37:57.78662922 +0000 UTC m=+0.019729637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:57 np0005536586 ceph-mgr[82825]: mgr[py] Loading python module 'crash'
Nov 26 07:37:58 np0005536586 ceph-mgr[82825]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:37:58 np0005536586 ceph-mgr[82825]: mgr[py] Loading python module 'dashboard'
Nov 26 07:37:58 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:58.164+0000 7f1de36d7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 podman[83778]: 2025-11-26 12:37:58.238087649 +0000 UTC m=+0.043343576 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:37:58 np0005536586 podman[83778]: 2025-11-26 12:37:58.321035262 +0000 UTC m=+0.126291168 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev dbb7ecfc-dd9a-4860-bb18-1201c5fedcc7 does not exist
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9796111b-7f3b-445b-a875-4db25f177569 does not exist
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e471cedc-a121-4f4c-ab0f-1d8fdb4853f2 does not exist
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Added host compute-0
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 57414f9b-0314-4361-be85-7eb871bf57d4 does not exist
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1))
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 07:37:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 07:37:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:58 np0005536586 jovial_cartwright[83676]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 07:37:58 np0005536586 jovial_cartwright[83676]: Scheduled mon update...
Nov 26 07:37:58 np0005536586 jovial_cartwright[83676]: Scheduled mgr update...
Nov 26 07:37:58 np0005536586 jovial_cartwright[83676]: Scheduled osd.default_drive_group update...
Nov 26 07:37:58 np0005536586 systemd[1]: libpod-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope: Deactivated successfully.
Nov 26 07:37:58 np0005536586 podman[84033]: 2025-11-26 12:37:58.828803357 +0000 UTC m=+0.016691463 container died 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:37:58 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0-merged.mount: Deactivated successfully.
Nov 26 07:37:58 np0005536586 podman[84033]: 2025-11-26 12:37:58.855188166 +0000 UTC m=+0.043076272 container remove 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:37:58 np0005536586 systemd[1]: libpod-conmon-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope: Deactivated successfully.
Nov 26 07:37:59 np0005536586 python3[84157]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:37:59 np0005536586 systemd[1]: Stopping Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.213417016 +0000 UTC m=+0.028961244 container create 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 systemd[1]: Started libpod-conmon-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope.
Nov 26 07:37:59 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:37:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.266170482 +0000 UTC m=+0.081714720 container init 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.272267141 +0000 UTC m=+0.087811369 container start 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.273594672 +0000 UTC m=+0.089138900 container attach 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.201583797 +0000 UTC m=+0.017128046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:37:59 np0005536586 podman[84228]: 2025-11-26 12:37:59.373104214 +0000 UTC m=+0.044909147 container died f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:37:59 np0005536586 systemd[1]: var-lib-containers-storage-overlay-559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644-merged.mount: Deactivated successfully.
Nov 26 07:37:59 np0005536586 podman[84228]: 2025-11-26 12:37:59.394645835 +0000 UTC m=+0.066450768 container remove f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:37:59 np0005536586 bash[84228]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx
Nov 26 07:37:59 np0005536586 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Main process exited, code=exited, status=143/n/a
Nov 26 07:37:59 np0005536586 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Failed with result 'exit-code'.
Nov 26 07:37:59 np0005536586 systemd[1]: Stopped Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:37:59 np0005536586 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Consumed 4.176s CPU time.
Nov 26 07:37:59 np0005536586 systemd[1]: Reloading.
Nov 26 07:37:59 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:37:59 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.aefzvx
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.aefzvx
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"} v 0) v1
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]: dispatch
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]': finished
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1))
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 6e52240c-0cee-499c-8ce3-442d359e171d does not exist
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 07:37:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793246863' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 07:37:59 np0005536586 compassionate_goldwasser[84212]: 
Nov 26 07:37:59 np0005536586 compassionate_goldwasser[84212]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":63,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T12:36:53.922147+0000","services":{}},"progress_events":{}}
Nov 26 07:37:59 np0005536586 systemd[1]: libpod-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope: Deactivated successfully.
Nov 26 07:37:59 np0005536586 conmon[84212]: conmon 91e14f4d9b8f90d525cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope/container/memory.events
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.812268636 +0000 UTC m=+0.627812863 container died 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:37:59 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362-merged.mount: Deactivated successfully.
Nov 26 07:37:59 np0005536586 podman[84190]: 2025-11-26 12:37:59.837548601 +0000 UTC m=+0.653092830 container remove 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:37:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:37:59 np0005536586 systemd[1]: libpod-conmon-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope: Deactivated successfully.
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Added host compute-0
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Saving service mon spec with placement compute-0
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Saving service mgr spec with placement compute-0
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Saving service osd.default_drive_group spec with placement compute-0
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]': finished
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 podman[84547]: 2025-11-26 12:38:00.348383236 +0000 UTC m=+0.037098899 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:38:00 np0005536586 podman[84547]: 2025-11-26 12:38:00.427951481 +0000 UTC m=+0.116667142 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 3b189e6a-9256-4375-8ac6-31a8b026514d does not exist
Nov 26 07:38:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev f5ef6df5-8cf9-46e6-9182-84070a9e5bc1 does not exist
Nov 26 07:38:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4c9deb10-729a-4bcc-a1fe-44a626703b19 does not exist
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:00 np0005536586 ceph-mgr[75236]: [progress INFO root] Writing back 3 completed events
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:00 np0005536586 podman[84734]: 2025-11-26 12:38:00.956010962 +0000 UTC m=+0.026877077 container create d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:38:00 np0005536586 systemd[1]: Started libpod-conmon-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope.
Nov 26 07:38:01 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:01.010497464 +0000 UTC m=+0.081363579 container init d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:01.015633563 +0000 UTC m=+0.086499677 container start d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:01.01679448 +0000 UTC m=+0.087660594 container attach d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 07:38:01 np0005536586 cool_herschel[84747]: 167 167
Nov 26 07:38:01 np0005536586 systemd[1]: libpod-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope: Deactivated successfully.
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:01.019049289 +0000 UTC m=+0.089915423 container died d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:01 np0005536586 systemd[1]: var-lib-containers-storage-overlay-8c1542da40b741c5f8281c9e10e28b40e228a58ae1498df2c948c904519f07f2-merged.mount: Deactivated successfully.
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:01.035024873 +0000 UTC m=+0.105890987 container remove d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:01 np0005536586 podman[84734]: 2025-11-26 12:38:00.944641718 +0000 UTC m=+0.015507852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:01 np0005536586 systemd[1]: libpod-conmon-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope: Deactivated successfully.
Nov 26 07:38:01 np0005536586 podman[84769]: 2025-11-26 12:38:01.143690087 +0000 UTC m=+0.025103876 container create 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:01 np0005536586 systemd[1]: Started libpod-conmon-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope.
Nov 26 07:38:01 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:01 np0005536586 podman[84769]: 2025-11-26 12:38:01.205656573 +0000 UTC m=+0.087070372 container init 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:38:01 np0005536586 podman[84769]: 2025-11-26 12:38:01.210014446 +0000 UTC m=+0.091428234 container start 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:38:01 np0005536586 podman[84769]: 2025-11-26 12:38:01.210984904 +0000 UTC m=+0.092398693 container attach 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: Removing key for mgr.compute-0.aefzvx
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:01 np0005536586 podman[84769]: 2025-11-26 12:38:01.133496959 +0000 UTC m=+0.014910758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: --> relative data size: 1.0
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ef2b480d-9484-4a2f-b46e-f0af80cc4943
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"} v 0) v1
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]: dispatch
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]': finished
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:02 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:02 np0005536586 lvm[84843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 07:38:02 np0005536586 lvm[84843]: VG ceph_vg0 finished
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 07:38:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214126572' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: stderr: got monmap epoch 1
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.0
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 26 07:38:02 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ef2b480d-9484-4a2f-b46e-f0af80cc4943 --setuser ceph --setgroup ceph
Nov 26 07:38:03 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]: dispatch
Nov 26 07:38:03 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]': finished
Nov 26 07:38:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:04 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 07:38:04 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 07:38:04 np0005536586 ceph-mon[74966]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 07:38:04 np0005536586 ceph-mon[74966]: Cluster is now healthy
Nov 26 07:38:04 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:02.847+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:04 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:02.847+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:04 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:02.848+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:04 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:02.848+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 26 07:38:04 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 241a5bb6-a0a2-4f46-939e-db435256704f
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"} v 0) v1
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]: dispatch
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]': finished
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:05 np0005536586 lvm[85775]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 07:38:05 np0005536586 lvm[85775]: VG ceph_vg1 finished
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714457435' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: stderr: got monmap epoch 1
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.1
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 26 07:38:05 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 241a5bb6-a0a2-4f46-939e-db435256704f --setuser ceph --setgroup ceph
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:06 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]: dispatch
Nov 26 07:38:06 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]': finished
Nov 26 07:38:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:07 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:07 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:07 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:07 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 26 07:38:07 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 830db782-65d7-4e18-bccf-dab0d5334a8b
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"} v 0) v1
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]: dispatch
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]': finished
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:08 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:08 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:08 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:08 np0005536586 lvm[86707]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 07:38:08 np0005536586 lvm[86707]: VG ceph_vg2 finished
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 07:38:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345018760' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: stderr: got monmap epoch 1
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.2
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 26 07:38:08 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 830db782-65d7-4e18-bccf-dab0d5334a8b --setuser ceph --setgroup ceph
Nov 26 07:38:09 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]: dispatch
Nov 26 07:38:09 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]': finished
Nov 26 07:38:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:10 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:10 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:10 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 07:38:10 np0005536586 flamboyant_proskuriakova[84782]: stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 26 07:38:10 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 26 07:38:11 np0005536586 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 26 07:38:11 np0005536586 systemd[1]: libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Deactivated successfully.
Nov 26 07:38:11 np0005536586 systemd[1]: libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Consumed 4.085s CPU time.
Nov 26 07:38:11 np0005536586 conmon[84782]: conmon 838032c3aa40db808c90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope/container/memory.events
Nov 26 07:38:11 np0005536586 podman[87609]: 2025-11-26 12:38:11.131892388 +0000 UTC m=+0.018946197 container died 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:38:11 np0005536586 systemd[1]: var-lib-containers-storage-overlay-11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1-merged.mount: Deactivated successfully.
Nov 26 07:38:11 np0005536586 podman[87609]: 2025-11-26 12:38:11.166181755 +0000 UTC m=+0.053235543 container remove 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:38:11 np0005536586 systemd[1]: libpod-conmon-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Deactivated successfully.
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.571037247 +0000 UTC m=+0.024449169 container create 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:11 np0005536586 systemd[1]: Started libpod-conmon-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope.
Nov 26 07:38:11 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.616982776 +0000 UTC m=+0.070394708 container init 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.621243308 +0000 UTC m=+0.074655230 container start 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.622552866 +0000 UTC m=+0.075964788 container attach 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:11 np0005536586 clever_goodall[87764]: 167 167
Nov 26 07:38:11 np0005536586 systemd[1]: libpod-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope: Deactivated successfully.
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.624615307 +0000 UTC m=+0.078027230 container died 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:38:11 np0005536586 systemd[1]: var-lib-containers-storage-overlay-43fb2278a5b006dc12500b7d5251b031172ee98c7dc2ca7ffcb15b2eae6d4714-merged.mount: Deactivated successfully.
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.642132661 +0000 UTC m=+0.095544583 container remove 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 07:38:11 np0005536586 podman[87751]: 2025-11-26 12:38:11.561409331 +0000 UTC m=+0.014821253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:11 np0005536586 systemd[1]: libpod-conmon-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope: Deactivated successfully.
Nov 26 07:38:11 np0005536586 podman[87786]: 2025-11-26 12:38:11.750673752 +0000 UTC m=+0.027722283 container create 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:11 np0005536586 systemd[1]: Started libpod-conmon-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope.
Nov 26 07:38:11 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:11 np0005536586 podman[87786]: 2025-11-26 12:38:11.809436662 +0000 UTC m=+0.086485192 container init 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:11 np0005536586 podman[87786]: 2025-11-26 12:38:11.814819046 +0000 UTC m=+0.091867576 container start 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:11 np0005536586 podman[87786]: 2025-11-26 12:38:11.816135998 +0000 UTC m=+0.093184548 container attach 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:38:11 np0005536586 podman[87786]: 2025-11-26 12:38:11.738047574 +0000 UTC m=+0.015096123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]: {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    "0": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "devices": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "/dev/loop3"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            ],
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_name": "ceph_lv0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_size": "21470642176",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "name": "ceph_lv0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "tags": {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.crush_device_class": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.encrypted": "0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_id": "0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.vdo": "0"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            },
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "vg_name": "ceph_vg0"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        }
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    ],
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    "1": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "devices": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "/dev/loop4"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            ],
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_name": "ceph_lv1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_size": "21470642176",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "name": "ceph_lv1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "tags": {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.crush_device_class": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.encrypted": "0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_id": "1",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.vdo": "0"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            },
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "vg_name": "ceph_vg1"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        }
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    ],
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    "2": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "devices": [
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "/dev/loop5"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            ],
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_name": "ceph_lv2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_size": "21470642176",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "name": "ceph_lv2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "tags": {
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.crush_device_class": "",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.encrypted": "0",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osd_id": "2",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:                "ceph.vdo": "0"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            },
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "type": "block",
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:            "vg_name": "ceph_vg2"
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:        }
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]:    ]
Nov 26 07:38:12 np0005536586 upbeat_hellman[87799]: }
Nov 26 07:38:12 np0005536586 systemd[1]: libpod-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope: Deactivated successfully.
Nov 26 07:38:12 np0005536586 conmon[87799]: conmon 5dbb57fd5389f5646a0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope/container/memory.events
Nov 26 07:38:12 np0005536586 podman[87786]: 2025-11-26 12:38:12.443531439 +0000 UTC m=+0.720579969 container died 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:12 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325-merged.mount: Deactivated successfully.
Nov 26 07:38:12 np0005536586 podman[87786]: 2025-11-26 12:38:12.472945531 +0000 UTC m=+0.749994060 container remove 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 07:38:12 np0005536586 systemd[1]: libpod-conmon-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope: Deactivated successfully.
Nov 26 07:38:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 26 07:38:12 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 07:38:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:12 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:12 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 26 07:38:12 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.87395128 +0000 UTC m=+0.028551942 container create a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:38:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 07:38:12 np0005536586 systemd[1]: Started libpod-conmon-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope.
Nov 26 07:38:12 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.914816737 +0000 UTC m=+0.069417400 container init a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.918914651 +0000 UTC m=+0.073515314 container start a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.919854759 +0000 UTC m=+0.074455422 container attach a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:12 np0005536586 distracted_brahmagupta[87964]: 167 167
Nov 26 07:38:12 np0005536586 systemd[1]: libpod-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope: Deactivated successfully.
Nov 26 07:38:12 np0005536586 conmon[87964]: conmon a78b6e67184336dc778b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope/container/memory.events
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.92231852 +0000 UTC m=+0.076919184 container died a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:12 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9866a1db4e48070e20337cc47024546cf0fb860e65ab7b2ea97c9f468b556f6a-merged.mount: Deactivated successfully.
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.938357125 +0000 UTC m=+0.092957789 container remove a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:12 np0005536586 podman[87951]: 2025-11-26 12:38:12.863338049 +0000 UTC m=+0.017938732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:12 np0005536586 systemd[1]: libpod-conmon-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope: Deactivated successfully.
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.105012861 +0000 UTC m=+0.023585024 container create 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:13 np0005536586 systemd[1]: Started libpod-conmon-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope.
Nov 26 07:38:13 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.15407138 +0000 UTC m=+0.072643563 container init 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.160401958 +0000 UTC m=+0.078974121 container start 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.164210103 +0000 UTC m=+0.082782266 container attach 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.095960043 +0000 UTC m=+0.014532225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:13 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 07:38:13 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]:                            [--no-systemd] [--no-tmpfs]
Nov 26 07:38:13 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 07:38:13 np0005536586 systemd[1]: libpod-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope: Deactivated successfully.
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.708491401 +0000 UTC m=+0.627063563 container died 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:13 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f-merged.mount: Deactivated successfully.
Nov 26 07:38:13 np0005536586 podman[87994]: 2025-11-26 12:38:13.73707968 +0000 UTC m=+0.655651843 container remove 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:38:13 np0005536586 systemd[1]: libpod-conmon-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope: Deactivated successfully.
Nov 26 07:38:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:13 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:13 np0005536586 ceph-mon[74966]: Deploying daemon osd.0 on compute-0
Nov 26 07:38:13 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:13 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:14 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:14 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:14 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:14 np0005536586 systemd[1]: Starting Ceph osd.0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:38:14 np0005536586 podman[88156]: 2025-11-26 12:38:14.473794666 +0000 UTC m=+0.026586294 container create bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:14 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:14 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:14 np0005536586 podman[88156]: 2025-11-26 12:38:14.525502798 +0000 UTC m=+0.078294426 container init bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:14 np0005536586 podman[88156]: 2025-11-26 12:38:14.530171531 +0000 UTC m=+0.082963159 container start bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:14 np0005536586 podman[88156]: 2025-11-26 12:38:14.531304635 +0000 UTC m=+0.084096263 container attach bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 26 07:38:14 np0005536586 podman[88156]: 2025-11-26 12:38:14.463036261 +0000 UTC m=+0.015827909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:15 np0005536586 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 07:38:15 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 07:38:15 np0005536586 bash[88156]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 07:38:15 np0005536586 systemd[1]: libpod-bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46.scope: Deactivated successfully.
Nov 26 07:38:15 np0005536586 conmon[88168]: conmon bb8f0d0586e1ce22a071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46.scope/container/memory.events
Nov 26 07:38:15 np0005536586 podman[88156]: 2025-11-26 12:38:15.347483197 +0000 UTC m=+0.900274835 container died bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:15 np0005536586 systemd[1]: var-lib-containers-storage-overlay-abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd-merged.mount: Deactivated successfully.
Nov 26 07:38:15 np0005536586 podman[88156]: 2025-11-26 12:38:15.377632961 +0000 UTC m=+0.930424589 container remove bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:38:15 np0005536586 podman[88346]: 2025-11-26 12:38:15.51272131 +0000 UTC m=+0.026771323 container create 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:38:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:15 np0005536586 podman[88346]: 2025-11-26 12:38:15.5556547 +0000 UTC m=+0.069704723 container init 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:38:15 np0005536586 podman[88346]: 2025-11-26 12:38:15.561213217 +0000 UTC m=+0.075263229 container start 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:15 np0005536586 bash[88346]: 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9
Nov 26 07:38:15 np0005536586 podman[88346]: 2025-11-26 12:38:15.501490501 +0000 UTC m=+0.015540534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:15 np0005536586 systemd[1]: Started Ceph osd.0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: pidfile_write: ignore empty --pid-file
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:15 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 26 07:38:15 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 26 07:38:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:15 np0005536586 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 07:38:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:15 np0005536586 podman[88511]: 2025-11-26 12:38:15.99661319 +0000 UTC m=+0.026452920 container create 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:16 np0005536586 systemd[1]: Started libpod-conmon-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope.
Nov 26 07:38:16 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:16.048875069 +0000 UTC m=+0.078714820 container init 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:16.053371077 +0000 UTC m=+0.083210807 container start 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:16.05528656 +0000 UTC m=+0.085126311 container attach 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:38:16 np0005536586 festive_clarke[88524]: 167 167
Nov 26 07:38:16 np0005536586 systemd[1]: libpod-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope: Deactivated successfully.
Nov 26 07:38:16 np0005536586 conmon[88524]: conmon 798c0b5121c8882ec8ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope/container/memory.events
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:16.057613333 +0000 UTC m=+0.087453062 container died 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:38:16 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1598ea706c603887ec5623530f212330fa635c1a6aba781d7eaabf95532e9958-merged.mount: Deactivated successfully.
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:16.0741019 +0000 UTC m=+0.103941630 container remove 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:16 np0005536586 podman[88511]: 2025-11-26 12:38:15.98595288 +0000 UTC m=+0.015792620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:16 np0005536586 systemd[1]: libpod-conmon-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope: Deactivated successfully.
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: load: jerasure load: lrc 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.252904777 +0000 UTC m=+0.026776362 container create 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:38:16 np0005536586 systemd[1]: Started libpod-conmon-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope.
Nov 26 07:38:16 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.310874237 +0000 UTC m=+0.084745821 container init 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.317074467 +0000 UTC m=+0.090946051 container start 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.31865571 +0000 UTC m=+0.092527293 container attach 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.241969287 +0000 UTC m=+0.015840891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs mount
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs mount shared_bdev_used = 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Git sha 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DB SUMMARY
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DB Session ID:  OP18G8N8BK0JDZ3FFAWB
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                     Options.env: 0x56033050dc70
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                Options.info_log: 0x56032f70a8a0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.write_buffer_manager: 0x560330616460
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Compression algorithms supported:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kZSTD supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kXpressCompression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kZlibCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1561b018-6fdf-4e5d-94af-e3c267a92376
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696447686, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696447976, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: freelist init
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: freelist _read_cfg
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs umount
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: Deploying daemon osd.1 on compute-0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs mount
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluefs mount shared_bdev_used = 4718592
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Git sha 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DB SUMMARY
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DB Session ID:  OP18G8N8BK0JDZ3FFAWA
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                     Options.env: 0x56032f85f8f0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                Options.info_log: 0x5603305096c0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.write_buffer_manager: 0x5603306166e0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Compression algorithms supported:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kZSTD supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kXpressCompression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kBZip2Compression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kLZ4Compression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kZlibCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: #011kSnappyCompression supported: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56032f6f7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56032f6f7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1561b018-6fdf-4e5d-94af-e3c267a92376
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696727818, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696730517, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696731502, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696732350, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696732839, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5603306ddc00
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: DB pointer 0x5603305ffa00
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: _get_class not permitted to load lua
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: _get_class not permitted to load sdk
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: _get_class not permitted to load test_remote_reads
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 load_pgs
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 load_pgs opened 0 pgs
Nov 26 07:38:16 np0005536586 ceph-osd[88362]: osd.0 0 log_to_monitors true
Nov 26 07:38:16 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:16.751+0000 7fe03c03f740 -1 osd.0 0 log_to_monitors true
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 26 07:38:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 07:38:16 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 07:38:16 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]:                            [--no-systemd] [--no-tmpfs]
Nov 26 07:38:16 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 07:38:16 np0005536586 systemd[1]: libpod-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope: Deactivated successfully.
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.876866131 +0000 UTC m=+0.650737715 container died 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:16 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898-merged.mount: Deactivated successfully.
Nov 26 07:38:16 np0005536586 podman[88565]: 2025-11-26 12:38:16.90800696 +0000 UTC m=+0.681878544 container remove 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:38:16 np0005536586 systemd[1]: libpod-conmon-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope: Deactivated successfully.
Nov 26 07:38:17 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:17 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:17 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:17 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:17 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:17 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:17 np0005536586 systemd[1]: Starting Ceph osd.1 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:17 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:17 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:17 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:17 np0005536586 podman[89137]: 2025-11-26 12:38:17.666892643 +0000 UTC m=+0.028179628 container create f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:17 np0005536586 podman[89137]: 2025-11-26 12:38:17.710942947 +0000 UTC m=+0.072229951 container init f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:38:17 np0005536586 podman[89137]: 2025-11-26 12:38:17.715897981 +0000 UTC m=+0.077184966 container start f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:38:17 np0005536586 podman[89137]: 2025-11-26 12:38:17.717342514 +0000 UTC m=+0.078629499 container attach f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:38:17 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 07:38:17 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 07:38:17 np0005536586 podman[89137]: 2025-11-26 12:38:17.655524284 +0000 UTC m=+0.016811279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:18 np0005536586 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 07:38:18 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 07:38:18 np0005536586 bash[89137]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 07:38:18 np0005536586 systemd[1]: libpod-f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba.scope: Deactivated successfully.
Nov 26 07:38:18 np0005536586 podman[89137]: 2025-11-26 12:38:18.519315448 +0000 UTC m=+0.880602434 container died f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377-merged.mount: Deactivated successfully.
Nov 26 07:38:18 np0005536586 podman[89137]: 2025-11-26 12:38:18.552345002 +0000 UTC m=+0.913631986 container remove f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 done with init, starting boot process
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 start_boot
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 07:38:18 np0005536586 ceph-osd[88362]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:18 np0005536586 podman[89311]: 2025-11-26 12:38:18.723191045 +0000 UTC m=+0.048375097 container create 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:18 np0005536586 podman[89311]: 2025-11-26 12:38:18.692159883 +0000 UTC m=+0.017343956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:18 np0005536586 podman[89311]: 2025-11-26 12:38:18.828425029 +0000 UTC m=+0.153609102 container init 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:38:18 np0005536586 podman[89311]: 2025-11-26 12:38:18.833386706 +0000 UTC m=+0.158570760 container start 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 26 07:38:18 np0005536586 bash[89311]: 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18
Nov 26 07:38:18 np0005536586 systemd[1]: Started Ceph osd.1 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: pidfile_write: ignore empty --pid-file
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 07:38:18 np0005536586 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 26 07:38:18 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.371451853 +0000 UTC m=+0.027940725 container create 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: load: jerasure load: lrc 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 07:38:19 np0005536586 systemd[1]: Started libpod-conmon-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope.
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 07:38:19 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.435379625 +0000 UTC m=+0.091868517 container init 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.444952277 +0000 UTC m=+0.101441149 container start 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:38:19 np0005536586 jolly_kare[89497]: 167 167
Nov 26 07:38:19 np0005536586 systemd[1]: libpod-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope: Deactivated successfully.
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.450786515 +0000 UTC m=+0.107275407 container attach 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.451023854 +0000 UTC m=+0.107512736 container died 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.361197622 +0000 UTC m=+0.017686513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:19 np0005536586 systemd[1]: var-lib-containers-storage-overlay-4c54553e1786bf140c8881262960a447e4d42049897bedb4a49a8062a4980d0e-merged.mount: Deactivated successfully.
Nov 26 07:38:19 np0005536586 podman[89475]: 2025-11-26 12:38:19.479944883 +0000 UTC m=+0.136433755 container remove 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:38:19 np0005536586 systemd[1]: libpod-conmon-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope: Deactivated successfully.
Nov 26 07:38:19 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:19 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 07:38:19 np0005536586 ceph-mon[74966]: Deploying daemon osd.2 on compute-0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs mount
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs mount shared_bdev_used = 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Git sha 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DB SUMMARY
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DB Session ID:  L4FCZBK85MEUFPLLH3BU
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                     Options.env: 0x561fc3e15c70
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                Options.info_log: 0x561fc30128a0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.write_buffer_manager: 0x561fc3f1e460
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Compression algorithms supported:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kZSTD supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kXpressCompression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kBZip2Compression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kLZ4Compression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kZlibCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: #011kSnappyCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93dfa10c-7ad9-4a11-b11e-e56de0349760
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699692120, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699692294, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 26 07:38:19 np0005536586 podman[89527]: 2025-11-26 12:38:19.692692094 +0000 UTC m=+0.040977701 container create e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: freelist init
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: freelist _read_cfg
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs umount
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 07:38:19 np0005536586 systemd[1]: Started libpod-conmon-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope.
Nov 26 07:38:19 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:19 np0005536586 podman[89527]: 2025-11-26 12:38:19.756531249 +0000 UTC m=+0.104816866 container init e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:19 np0005536586 podman[89527]: 2025-11-26 12:38:19.762040813 +0000 UTC m=+0.110326430 container start e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:19 np0005536586 podman[89527]: 2025-11-26 12:38:19.767790942 +0000 UTC m=+0.116076560 container attach e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:38:19 np0005536586 podman[89527]: 2025-11-26 12:38:19.675704183 +0000 UTC m=+0.023989819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs mount
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluefs mount shared_bdev_used = 4718592
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Git sha 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DB SUMMARY
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DB Session ID:  L4FCZBK85MEUFPLLH3BV
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                     Options.env: 0x561fc3fc6460
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                Options.info_log: 0x561fc3012620
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.write_buffer_manager: 0x561fc3f1e460
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Compression algorithms supported:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kZSTD supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kXpressCompression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kBZip2Compression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kLZ4Compression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kZlibCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kLZ4HCCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     kSnappyCompression supported: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561fc2fff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561fc2fff090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93dfa10c-7ad9-4a11-b11e-e56de0349760
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699994970, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:19 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700037414, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160699, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700038882, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160700, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700043408, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160700, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700044084, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561fc316c000
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: DB pointer 0x561fc3f07a00
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: _get_class not permitted to load lua
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: _get_class not permitted to load sdk
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: _get_class not permitted to load test_remote_reads
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 load_pgs
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 load_pgs opened 0 pgs
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: osd.1 0 log_to_monitors true
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:38:20 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Bloc
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:20.070+0000 7f9eb500d740 -1 osd.1 0 log_to_monitors true
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 76.582 iops: 19604.878 elapsed_sec: 0.153
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: log_channel(cluster) log [WRN] : OSD bench result of 19604.877803 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:20.288+0000 7fe037fbf640 -1 osd.0 0 waiting for initial osdmap
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 0 waiting for initial osdmap
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:20.301+0000 7fe0335e7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]:                            [--no-systemd] [--no-tmpfs]
Nov 26 07:38:20 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 07:38:20 np0005536586 podman[89527]: 2025-11-26 12:38:20.324166313 +0000 UTC m=+0.672451940 container died e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:20 np0005536586 systemd[1]: libpod-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope: Deactivated successfully.
Nov 26 07:38:20 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee-merged.mount: Deactivated successfully.
Nov 26 07:38:20 np0005536586 podman[89527]: 2025-11-26 12:38:20.35863639 +0000 UTC m=+0.706922007 container remove e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:20 np0005536586 systemd[1]: libpod-conmon-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope: Deactivated successfully.
Nov 26 07:38:20 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:20 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:20 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:20 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798] boot
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:20 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:20 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:20 np0005536586 ceph-osd[88362]: osd.0 9 state: booting -> active
Nov 26 07:38:20 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:20 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:20 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:20 np0005536586 systemd[1]: Starting Ceph osd.2 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 07:38:21 np0005536586 podman[90099]: 2025-11-26 12:38:21.123591707 +0000 UTC m=+0.027424749 container create 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:21 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:21 np0005536586 podman[90099]: 2025-11-26 12:38:21.16610389 +0000 UTC m=+0.069936932 container init 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:38:21 np0005536586 podman[90099]: 2025-11-26 12:38:21.171562849 +0000 UTC m=+0.075395892 container start 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:38:21 np0005536586 podman[90099]: 2025-11-26 12:38:21.172698207 +0000 UTC m=+0.076531249 container attach 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:21 np0005536586 podman[90099]: 2025-11-26 12:38:21.112971674 +0000 UTC m=+0.016804736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 done with init, starting boot process
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 start_boot
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 07:38:21 np0005536586 ceph-osd[89328]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: OSD bench result of 19604.877803 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798] boot
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 07:38:21 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] creating mgr pool
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 26 07:38:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:21 np0005536586 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 07:38:21 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 07:38:21 np0005536586 bash[90099]: --> ceph-volume raw activate successful for osd ID: 2
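The activation trace above can be read as a manual procedure. A sketch of the equivalent commands, taken directly from the `Running command:` lines logged by `ceph-volume raw activate` (paths and the OSD ID match this host; they would differ for other OSDs or LVs, and in this deployment the tool runs them inside the activation container, not on the host shell):

```shell
# Give the ceph user ownership of the OSD data directory.
chown -R ceph:ceph /var/lib/ceph/osd/ceph-2

# Populate the OSD dir from the BlueStore block device metadata.
ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 \
    --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2

# Fix ownership on the LV mapper link and the underlying dm node.
chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
chown -R ceph:ceph /dev/dm-2

# Link the block device into the OSD dir and re-own everything.
ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
```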
Nov 26 07:38:21 np0005536586 systemd[1]: libpod-75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd.scope: Deactivated successfully.
Nov 26 07:38:22 np0005536586 podman[90231]: 2025-11-26 12:38:22.012373083 +0000 UTC m=+0.016738710 container died 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 07:38:22 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a-merged.mount: Deactivated successfully.
Nov 26 07:38:22 np0005536586 podman[90231]: 2025-11-26 12:38:22.078795556 +0000 UTC m=+0.083161163 container remove 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:22 np0005536586 podman[90281]: 2025-11-26 12:38:22.236380699 +0000 UTC m=+0.042166451 container create fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:22 np0005536586 podman[90281]: 2025-11-26 12:38:22.208337191 +0000 UTC m=+0.014122962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:22 np0005536586 podman[90281]: 2025-11-26 12:38:22.348835055 +0000 UTC m=+0.154620806 container init fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:38:22 np0005536586 podman[90281]: 2025-11-26 12:38:22.353574211 +0000 UTC m=+0.159359962 container start fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:22 np0005536586 bash[90281]: fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06
Nov 26 07:38:22 np0005536586 systemd[1]: Started Ceph osd.2 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: pidfile_write: ignore empty --pid-file
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:22 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:22 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:22 np0005536586 ceph-osd[88362]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 07:38:22 np0005536586 ceph-osd[88362]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 26 07:38:22 np0005536586 ceph-osd[88362]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:22 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:22 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 26 07:38:22 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.865968418 +0000 UTC m=+0.028786636 container create ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:22 np0005536586 systemd[1]: Started libpod-conmon-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope.
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: load: jerasure load: lrc 
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:22 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 07:38:22 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.929941466 +0000 UTC m=+0.092759704 container init ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.934565274 +0000 UTC m=+0.097383492 container start ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:38:22 np0005536586 clever_black[90459]: 167 167
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.93743296 +0000 UTC m=+0.100251177 container attach ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:22 np0005536586 systemd[1]: libpod-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope: Deactivated successfully.
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.938929241 +0000 UTC m=+0.101747458 container died ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.854023909 +0000 UTC m=+0.016842147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:22 np0005536586 systemd[1]: var-lib-containers-storage-overlay-67311d0ef53154ce01a3063bf6181de6d1bf40a966f0c6d0a6bdb395d83fddc3-merged.mount: Deactivated successfully.
Nov 26 07:38:22 np0005536586 podman[90444]: 2025-11-26 12:38:22.96338876 +0000 UTC m=+0.126206976 container remove ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:22 np0005536586 systemd[1]: libpod-conmon-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope: Deactivated successfully.
Nov 26 07:38:23 np0005536586 podman[90485]: 2025-11-26 12:38:23.084966175 +0000 UTC m=+0.034296580 container create 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:38:23 np0005536586 systemd[1]: Started libpod-conmon-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope.
Nov 26 07:38:23 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:23 np0005536586 podman[90485]: 2025-11-26 12:38:23.136255726 +0000 UTC m=+0.085586131 container init 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:23 np0005536586 podman[90485]: 2025-11-26 12:38:23.141523582 +0000 UTC m=+0.090853988 container start 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:23 np0005536586 podman[90485]: 2025-11-26 12:38:23.142610629 +0000 UTC m=+0.091941034 container attach 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:23 np0005536586 podman[90485]: 2025-11-26 12:38:23.072399259 +0000 UTC m=+0.021729684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 97.496 iops: 24958.887 elapsed_sec: 0.120
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: log_channel(cluster) log [WRN] : OSD bench result of 24958.887305 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
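[editor's note] The warning above recommends benchmarking the device externally and overriding the mclock capacity option. A minimal sketch of that follow-up, assuming fio is installed and admin access to this cluster; the IOPS value and job parameters are illustrative, not taken from this log:

```shell
# Probe random-read IOPS of osd.1's backing device with fio.
# --readonly guards against accidental writes; a write benchmark
# should only be run against a scratch device.
fio --name=iops-probe --filename=/var/lib/ceph/osd/ceph-1/block \
    --direct=1 --rw=randread --bs=4k --iodepth=32 \
    --runtime=60 --time_based --readonly

# The log reports a rotational device, so override the HDD variant
# with the measured value (315 here is a placeholder).
ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 315
```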
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 0 waiting for initial osdmap
Nov 26 07:38:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:23.371+0000 7f9eb0f8d640 -1 osd.1 0 waiting for initial osdmap
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 set_numa_affinity not setting numa affinity
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 26 07:38:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:23.394+0000 7f9eac5b5640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs mount
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs mount shared_bdev_used = 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Git sha 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DB SUMMARY
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DB Session ID:  ZFP68MW27DJPUF7WJ9PW
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                     Options.env: 0x5640f07c7c70
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                Options.info_log: 0x5640ef9c4800
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.write_buffer_manager: 0x5640f08d2460
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Compression algorithms supported:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kZSTD supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kXpressCompression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kBZip2Compression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kLZ4Compression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kZlibCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kLZ4HCCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     kSnappyCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d949341c-8934-42e1-848d-1fe9b1f3749e
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703476890, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703477035, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: freelist init
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: freelist _read_cfg
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs umount
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 07:38:23 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:23 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060] boot
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 12 state: booting -> active
Nov 26 07:38:23 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:23 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs mount
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluefs mount shared_bdev_used = 4718592
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: RocksDB version: 7.9.2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Git sha 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DB SUMMARY
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DB Session ID:  ZFP68MW27DJPUF7WJ9PX
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: CURRENT file:  CURRENT
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.error_if_exists: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.create_if_missing: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                     Options.env: 0x5640f0978310
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                Options.info_log: 0x5640efc8afc0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.statistics: (nil)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.use_fsync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.db_log_dir: 
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.write_buffer_manager: 0x5640f08d26e0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.unordered_write: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.row_cache: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                              Options.wal_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.two_write_queues: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.wal_compression: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.atomic_flush: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_background_jobs: 4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_background_compactions: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_subcompactions: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.max_open_files: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Compression algorithms supported:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kZSTD supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kXpressCompression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kBZip2Compression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kLZ4Compression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kZlibCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: #011kSnappyCompression supported: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640ef9b1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640ef9b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d949341c-8934-42e1-848d-1fe9b1f3749e
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703774469, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703776843, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703777826, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703778579, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703779052, "job": 1, "event": "recovery_finished"}
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5640efb1fc00
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: DB pointer 0x5640f08bba00
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: _get_class not permitted to load lua
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: _get_class not permitted to load sdk
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: _get_class not permitted to load test_remote_reads
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 load_pgs
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 load_pgs opened 0 pgs
Nov 26 07:38:23 np0005536586 ceph-osd[90297]: osd.2 0 log_to_monitors true
Nov 26 07:38:23 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:23.791+0000 7f7871f80740 -1 osd.2 0 log_to_monitors true
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 26 07:38:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 07:38:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]: {
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_id": 1,
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "type": "bluestore"
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    },
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_id": 2,
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "type": "bluestore"
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    },
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_id": 0,
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:        "type": "bluestore"
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]:    }
Nov 26 07:38:23 np0005536586 goofy_swanson[90499]: }
Nov 26 07:38:23 np0005536586 systemd[1]: libpod-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope: Deactivated successfully.
Nov 26 07:38:23 np0005536586 podman[90948]: 2025-11-26 12:38:23.959711457 +0000 UTC m=+0.020069884 container died 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:38:23 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479-merged.mount: Deactivated successfully.
Nov 26 07:38:23 np0005536586 podman[90948]: 2025-11-26 12:38:23.990455434 +0000 UTC m=+0.050813852 container remove 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:23 np0005536586 systemd[1]: libpod-conmon-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope: Deactivated successfully.
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:24 np0005536586 podman[91167]: 2025-11-26 12:38:24.586790709 +0000 UTC m=+0.036596111 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:24 np0005536586 podman[91167]: 2025-11-26 12:38:24.664950658 +0000 UTC m=+0.114756060 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:24 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: OSD bench result of 24958.887305 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060] boot
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:24 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 07:38:24 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] creating main.db for devicehealth
Nov 26 07:38:24 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] Check health
Nov 26 07:38:24 np0005536586 ceph-mgr[75236]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 07:38:24 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 07:38:24 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.362946927 +0000 UTC m=+0.028206178 container create be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:38:25 np0005536586 systemd[1]: Started libpod-conmon-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope.
Nov 26 07:38:25 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.406565945 +0000 UTC m=+0.071825206 container init be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.411476617 +0000 UTC m=+0.076735867 container start be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.412702916 +0000 UTC m=+0.077962167 container attach be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:38:25 np0005536586 jovial_chebyshev[91424]: 167 167
Nov 26 07:38:25 np0005536586 systemd[1]: libpod-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope: Deactivated successfully.
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.415522481 +0000 UTC m=+0.080781732 container died be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:25 np0005536586 systemd[1]: var-lib-containers-storage-overlay-fc6ed5cf83c0f8fcfe4cefbb82d75ef9db79bf723d474c46634aebbb4651b2cb-merged.mount: Deactivated successfully.
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.434505077 +0000 UTC m=+0.099764328 container remove be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:38:25 np0005536586 podman[91411]: 2025-11-26 12:38:25.351090113 +0000 UTC m=+0.016349355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:25 np0005536586 systemd[1]: libpod-conmon-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope: Deactivated successfully.
Nov 26 07:38:25 np0005536586 podman[91446]: 2025-11-26 12:38:25.548429774 +0000 UTC m=+0.031484339 container create 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:38:25 np0005536586 systemd[1]: Started libpod-conmon-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope.
Nov 26 07:38:25 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:25 np0005536586 podman[91446]: 2025-11-26 12:38:25.596826851 +0000 UTC m=+0.079881427 container init 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:38:25 np0005536586 podman[91446]: 2025-11-26 12:38:25.602450412 +0000 UTC m=+0.085504978 container start 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:38:25 np0005536586 podman[91446]: 2025-11-26 12:38:25.603789365 +0000 UTC m=+0.086843931 container attach 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:38:25 np0005536586 podman[91446]: 2025-11-26 12:38:25.534478397 +0000 UTC m=+0.017532974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 done with init, starting boot process
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 start_boot
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 07:38:25 np0005536586 ceph-osd[90297]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:25 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:25 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:25 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 07:38:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:26 np0005536586 jolly_pike[91459]: [
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:    {
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "available": false,
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "ceph_device": false,
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "lsm_data": {},
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "lvs": [],
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "path": "/dev/sr0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "rejected_reasons": [
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "Has a FileSystem",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "Insufficient space (<5GB)"
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        ],
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        "sys_api": {
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "actuators": null,
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "device_nodes": "sr0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "devname": "sr0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "human_readable_size": "474.00 KB",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "id_bus": "ata",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "model": "QEMU DVD-ROM",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "nr_requests": "64",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "parent": "/dev/sr0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "partitions": {},
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "path": "/dev/sr0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "removable": "1",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "rev": "2.5+",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "ro": "0",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "rotational": "1",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "sas_address": "",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "sas_device_handle": "",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "scheduler_mode": "mq-deadline",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "sectors": 0,
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "sectorsize": "2048",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "size": 485376.0,
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "support_discard": "2048",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "type": "disk",
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:            "vendor": "QEMU"
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:        }
Nov 26 07:38:26 np0005536586 jolly_pike[91459]:    }
Nov 26 07:38:26 np0005536586 jolly_pike[91459]: ]
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.whkbdn(active, since 50s)
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 07:38:26 np0005536586 systemd[1]: libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Deactivated successfully.
Nov 26 07:38:26 np0005536586 systemd[1]: libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Consumed 1.105s CPU time.
Nov 26 07:38:26 np0005536586 conmon[91459]: conmon 335c28c3ce37bceca369 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope/container/memory.events
Nov 26 07:38:26 np0005536586 podman[91446]: 2025-11-26 12:38:26.697820678 +0000 UTC m=+1.180875254 container died 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:38:26 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785-merged.mount: Deactivated successfully.
Nov 26 07:38:26 np0005536586 podman[91446]: 2025-11-26 12:38:26.738445901 +0000 UTC m=+1.221500477 container remove 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:26 np0005536586 systemd[1]: libpod-conmon-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Deactivated successfully.
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 5fb42466-2929-4349-b9f1-e136a8fb6ead does not exist
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1b21a0fb-440b-4d28-844a-7a9083e2eb9e does not exist
Nov 26 07:38:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev be3f3deb-1d51-4596-a12b-5a8135e5c86d does not exist
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.225937482 +0000 UTC m=+0.030535424 container create d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:38:27 np0005536586 systemd[1]: Started libpod-conmon-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope.
Nov 26 07:38:27 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.283297909 +0000 UTC m=+0.087895870 container init d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.289257214 +0000 UTC m=+0.093855156 container start d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:38:27 np0005536586 stupefied_wozniak[93438]: 167 167
Nov 26 07:38:27 np0005536586 systemd[1]: libpod-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope: Deactivated successfully.
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.295122121 +0000 UTC m=+0.099720073 container attach d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.295294628 +0000 UTC m=+0.099892569 container died d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.215934357 +0000 UTC m=+0.020532318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:27 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3610f11a5a79c4a90e871ffe5bd7654395e732ff8e5d853f09ca320713581b9e-merged.mount: Deactivated successfully.
Nov 26 07:38:27 np0005536586 podman[93425]: 2025-11-26 12:38:27.319580235 +0000 UTC m=+0.124178178 container remove d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:38:27 np0005536586 systemd[1]: libpod-conmon-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope: Deactivated successfully.
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 92.999 iops: 23807.863 elapsed_sec: 0.126
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: log_channel(cluster) log [WRN] : OSD bench result of 23807.862739 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 0 waiting for initial osdmap
Nov 26 07:38:27 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:27.416+0000 7f786df00640 -1 osd.2 0 waiting for initial osdmap
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Nov 26 07:38:27 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:27.430+0000 7f7869528640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 set_numa_affinity not setting numa affinity
Nov 26 07:38:27 np0005536586 podman[93460]: 2025-11-26 12:38:27.433537625 +0000 UTC m=+0.030606258 container create 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 26 07:38:27 np0005536586 systemd[1]: Started libpod-conmon-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope.
Nov 26 07:38:27 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:27 np0005536586 podman[93460]: 2025-11-26 12:38:27.48625724 +0000 UTC m=+0.083325884 container init 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:38:27 np0005536586 podman[93460]: 2025-11-26 12:38:27.492973799 +0000 UTC m=+0.090042443 container start 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:27 np0005536586 podman[93460]: 2025-11-26 12:38:27.496220601 +0000 UTC m=+0.093289236 container attach 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:38:27 np0005536586 podman[93460]: 2025-11-26 12:38:27.421892662 +0000 UTC m=+0.018961317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:27 np0005536586 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453] boot
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 07:38:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 07:38:27 np0005536586 ceph-osd[90297]: osd.2 15 state: booting -> active
Nov 26 07:38:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 07:38:28 np0005536586 elated_williams[93475]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:38:28 np0005536586 elated_williams[93475]: --> relative data size: 1.0
Nov 26 07:38:28 np0005536586 elated_williams[93475]: --> All data devices are unavailable
Nov 26 07:38:28 np0005536586 systemd[1]: libpod-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope: Deactivated successfully.
Nov 26 07:38:28 np0005536586 podman[93460]: 2025-11-26 12:38:28.308120598 +0000 UTC m=+0.905189242 container died 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:38:28 np0005536586 systemd[1]: var-lib-containers-storage-overlay-26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd-merged.mount: Deactivated successfully.
Nov 26 07:38:28 np0005536586 podman[93460]: 2025-11-26 12:38:28.337730771 +0000 UTC m=+0.934799405 container remove 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:38:28 np0005536586 systemd[1]: libpod-conmon-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope: Deactivated successfully.
Nov 26 07:38:28 np0005536586 ceph-mon[74966]: OSD bench result of 23807.862739 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 07:38:28 np0005536586 ceph-mon[74966]: osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453] boot
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.761460594 +0000 UTC m=+0.037251000 container create a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:28 np0005536586 systemd[1]: Started libpod-conmon-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope.
Nov 26 07:38:28 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.807476196 +0000 UTC m=+0.083266613 container init a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.812160788 +0000 UTC m=+0.087951196 container start a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.813527214 +0000 UTC m=+0.089317622 container attach a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:28 np0005536586 jovial_mahavira[93658]: 167 167
Nov 26 07:38:28 np0005536586 systemd[1]: libpod-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope: Deactivated successfully.
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.815786599 +0000 UTC m=+0.091577006 container died a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:38:28 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5a3057ea3ca8ef41a91bd8c054737a4fa7ac7e14ea3c96f31c2a6a3e78a2e006-merged.mount: Deactivated successfully.
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.832147174 +0000 UTC m=+0.107937580 container remove a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:28 np0005536586 podman[93645]: 2025-11-26 12:38:28.75054934 +0000 UTC m=+0.026339757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:28 np0005536586 systemd[1]: libpod-conmon-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope: Deactivated successfully.
Nov 26 07:38:28 np0005536586 podman[93680]: 2025-11-26 12:38:28.94455942 +0000 UTC m=+0.027507314 container create ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:38:28 np0005536586 systemd[1]: Started libpod-conmon-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope.
Nov 26 07:38:28 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:28 np0005536586 podman[93680]: 2025-11-26 12:38:28.994111874 +0000 UTC m=+0.077059779 container init ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 07:38:29 np0005536586 podman[93680]: 2025-11-26 12:38:29.000098119 +0000 UTC m=+0.083046014 container start ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 07:38:29 np0005536586 podman[93680]: 2025-11-26 12:38:29.001382029 +0000 UTC m=+0.084329944 container attach ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:29 np0005536586 podman[93680]: 2025-11-26 12:38:28.933024344 +0000 UTC m=+0.015972260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]: {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    "0": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "devices": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "/dev/loop3"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            ],
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_name": "ceph_lv0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_size": "21470642176",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "name": "ceph_lv0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "tags": {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.crush_device_class": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.encrypted": "0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_id": "0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.vdo": "0"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            },
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "vg_name": "ceph_vg0"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        }
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    ],
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    "1": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "devices": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "/dev/loop4"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            ],
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_name": "ceph_lv1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_size": "21470642176",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "name": "ceph_lv1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "tags": {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.crush_device_class": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.encrypted": "0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_id": "1",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.vdo": "0"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            },
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "vg_name": "ceph_vg1"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        }
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    ],
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    "2": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "devices": [
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "/dev/loop5"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            ],
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_name": "ceph_lv2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_size": "21470642176",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "name": "ceph_lv2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "tags": {
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.crush_device_class": "",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.encrypted": "0",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osd_id": "2",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:                "ceph.vdo": "0"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            },
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "type": "block",
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:            "vg_name": "ceph_vg2"
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:        }
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]:    ]
Nov 26 07:38:29 np0005536586 peaceful_liskov[93693]: }
Nov 26 07:38:29 np0005536586 systemd[1]: libpod-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope: Deactivated successfully.
Nov 26 07:38:29 np0005536586 conmon[93693]: conmon ec03ea21a1c3cfaa1ff8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope/container/memory.events
Nov 26 07:38:29 np0005536586 podman[93680]: 2025-11-26 12:38:29.637979059 +0000 UTC m=+0.720926953 container died ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:29 np0005536586 systemd[1]: var-lib-containers-storage-overlay-706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957-merged.mount: Deactivated successfully.
Nov 26 07:38:29 np0005536586 podman[93680]: 2025-11-26 12:38:29.669433942 +0000 UTC m=+0.752381837 container remove ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:29 np0005536586 systemd[1]: libpod-conmon-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope: Deactivated successfully.
Nov 26 07:38:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 26 07:38:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 26 07:38:29 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 26 07:38:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 07:38:30 np0005536586 python3[93837]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.087022175 +0000 UTC m=+0.028171221 container create c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.107238445 +0000 UTC m=+0.032125994 container create fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:30 np0005536586 systemd[1]: Started libpod-conmon-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope.
Nov 26 07:38:30 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:30 np0005536586 systemd[1]: Started libpod-conmon-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope.
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.148867638 +0000 UTC m=+0.090016694 container init c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.15335611 +0000 UTC m=+0.094505156 container start c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:38:30 np0005536586 sharp_keller[93892]: 167 167
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.156339285 +0000 UTC m=+0.097488351 container attach c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.15727249 +0000 UTC m=+0.098421586 container died c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:38:30 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:30 np0005536586 systemd[1]: libpod-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope: Deactivated successfully.
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b8a47751961abb2faf8db3e1ac28aa61078b2296bb91cbb17cc390816c1377ec-merged.mount: Deactivated successfully.
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.075643997 +0000 UTC m=+0.016793064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.173793477 +0000 UTC m=+0.098681046 container init fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.182807462 +0000 UTC m=+0.107695011 container start fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:38:30 np0005536586 podman[93868]: 2025-11-26 12:38:30.184228992 +0000 UTC m=+0.125378038 container remove c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.189477241 +0000 UTC m=+0.114364810 container attach fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.095651191 +0000 UTC m=+0.020538760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:30 np0005536586 systemd[1]: libpod-conmon-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope: Deactivated successfully.
Nov 26 07:38:30 np0005536586 podman[93920]: 2025-11-26 12:38:30.291566456 +0000 UTC m=+0.026262971 container create 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:30 np0005536586 systemd[1]: Started libpod-conmon-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope.
Nov 26 07:38:30 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:30 np0005536586 podman[93920]: 2025-11-26 12:38:30.353093286 +0000 UTC m=+0.087789821 container init 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:30 np0005536586 podman[93920]: 2025-11-26 12:38:30.357547883 +0000 UTC m=+0.092244399 container start 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:30 np0005536586 podman[93920]: 2025-11-26 12:38:30.359088729 +0000 UTC m=+0.093785244 container attach 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 07:38:30 np0005536586 podman[93920]: 2025-11-26 12:38:30.280511239 +0000 UTC m=+0.015207774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 07:38:30 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764092662' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 07:38:30 np0005536586 nifty_hermann[93897]: 
Nov 26 07:38:30 np0005536586 nifty_hermann[93897]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":474587136,"bytes_avail":42466697216,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T12:36:53.922147+0000","services":{}},"progress_events":{}}
Nov 26 07:38:30 np0005536586 systemd[1]: libpod-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope: Deactivated successfully.
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.677164784 +0000 UTC m=+0.602052333 container died fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:38:30 np0005536586 systemd[1]: var-lib-containers-storage-overlay-4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218-merged.mount: Deactivated successfully.
Nov 26 07:38:30 np0005536586 podman[93876]: 2025-11-26 12:38:30.700692588 +0000 UTC m=+0.625580137 container remove fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:30 np0005536586 systemd[1]: libpod-conmon-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope: Deactivated successfully.
Nov 26 07:38:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:31 np0005536586 python3[94000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:31 np0005536586 podman[94016]: 2025-11-26 12:38:31.100558681 +0000 UTC m=+0.031601111 container create 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:31 np0005536586 silly_brown[93934]: {
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_id": 1,
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "type": "bluestore"
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    },
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_id": 2,
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "type": "bluestore"
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    },
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_id": 0,
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:31 np0005536586 silly_brown[93934]:        "type": "bluestore"
Nov 26 07:38:31 np0005536586 silly_brown[93934]:    }
Nov 26 07:38:31 np0005536586 silly_brown[93934]: }
Nov 26 07:38:31 np0005536586 systemd[1]: Started libpod-conmon-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope.
Nov 26 07:38:31 np0005536586 podman[93920]: 2025-11-26 12:38:31.140313738 +0000 UTC m=+0.875010252 container died 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:31 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:31 np0005536586 systemd[1]: libpod-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope: Deactivated successfully.
Nov 26 07:38:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:31 np0005536586 podman[94016]: 2025-11-26 12:38:31.158382885 +0000 UTC m=+0.089425325 container init 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:31 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993-merged.mount: Deactivated successfully.
Nov 26 07:38:31 np0005536586 podman[94016]: 2025-11-26 12:38:31.162981836 +0000 UTC m=+0.094024256 container start 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:38:31 np0005536586 podman[94016]: 2025-11-26 12:38:31.165484532 +0000 UTC m=+0.096526952 container attach 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 07:38:31 np0005536586 podman[93920]: 2025-11-26 12:38:31.177111079 +0000 UTC m=+0.911807594 container remove 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:31 np0005536586 podman[94016]: 2025-11-26 12:38:31.089505067 +0000 UTC m=+0.020547506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:31 np0005536586 systemd[1]: libpod-conmon-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope: Deactivated successfully.
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 26 07:38:32 np0005536586 exciting_bartik[94037]: pool 'vms' created
Nov 26 07:38:32 np0005536586 systemd[1]: libpod-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope: Deactivated successfully.
Nov 26 07:38:32 np0005536586 podman[94016]: 2025-11-26 12:38:32.228804538 +0000 UTC m=+1.159846968 container died 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:38:32 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212-merged.mount: Deactivated successfully.
Nov 26 07:38:32 np0005536586 podman[94016]: 2025-11-26 12:38:32.24925979 +0000 UTC m=+1.180302210 container remove 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:38:32 np0005536586 systemd[1]: libpod-conmon-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope: Deactivated successfully.
Nov 26 07:38:32 np0005536586 python3[94158]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:32 np0005536586 podman[94159]: 2025-11-26 12:38:32.500885769 +0000 UTC m=+0.026874999 container create db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 07:38:32 np0005536586 systemd[1]: Started libpod-conmon-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope.
Nov 26 07:38:32 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:32 np0005536586 podman[94159]: 2025-11-26 12:38:32.551239439 +0000 UTC m=+0.077228658 container init db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:32 np0005536586 podman[94159]: 2025-11-26 12:38:32.555329117 +0000 UTC m=+0.081318336 container start db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:38:32 np0005536586 podman[94159]: 2025-11-26 12:38:32.556475476 +0000 UTC m=+0.082464695 container attach db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:38:32 np0005536586 podman[94159]: 2025-11-26 12:38:32.49014092 +0000 UTC m=+0.016130159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:32 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 26 07:38:33 np0005536586 quirky_perlman[94171]: pool 'volumes' created
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 26 07:38:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:33 np0005536586 systemd[1]: libpod-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope: Deactivated successfully.
Nov 26 07:38:33 np0005536586 conmon[94171]: conmon db8b0728b3576c35b751 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope/container/memory.events
Nov 26 07:38:33 np0005536586 podman[94159]: 2025-11-26 12:38:33.237541323 +0000 UTC m=+0.763530542 container died db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:38:33 np0005536586 systemd[1]: var-lib-containers-storage-overlay-46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241-merged.mount: Deactivated successfully.
Nov 26 07:38:33 np0005536586 podman[94159]: 2025-11-26 12:38:33.259983544 +0000 UTC m=+0.785972763 container remove db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:38:33 np0005536586 systemd[1]: libpod-conmon-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope: Deactivated successfully.
Nov 26 07:38:33 np0005536586 python3[94233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:33 np0005536586 podman[94234]: 2025-11-26 12:38:33.506641344 +0000 UTC m=+0.027825558 container create 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:33 np0005536586 systemd[1]: Started libpod-conmon-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope.
Nov 26 07:38:33 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:33 np0005536586 podman[94234]: 2025-11-26 12:38:33.567043645 +0000 UTC m=+0.088227869 container init 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:33 np0005536586 podman[94234]: 2025-11-26 12:38:33.571351786 +0000 UTC m=+0.092535991 container start 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 26 07:38:33 np0005536586 podman[94234]: 2025-11-26 12:38:33.57233124 +0000 UTC m=+0.093515464 container attach 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:38:33 np0005536586 podman[94234]: 2025-11-26 12:38:33.495678381 +0000 UTC m=+0.016862605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v38: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:33 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 26 07:38:34 np0005536586 relaxed_bell[94246]: pool 'backups' created
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:34 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:34 np0005536586 systemd[1]: libpod-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope: Deactivated successfully.
Nov 26 07:38:34 np0005536586 conmon[94246]: conmon 5738eb540c01d983e7f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope/container/memory.events
Nov 26 07:38:34 np0005536586 podman[94234]: 2025-11-26 12:38:34.241465902 +0000 UTC m=+0.762650096 container died 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:38:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay-8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb-merged.mount: Deactivated successfully.
Nov 26 07:38:34 np0005536586 podman[94234]: 2025-11-26 12:38:34.262383327 +0000 UTC m=+0.783567532 container remove 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 07:38:34 np0005536586 systemd[1]: libpod-conmon-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope: Deactivated successfully.
Nov 26 07:38:34 np0005536586 python3[94308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:34 np0005536586 podman[94309]: 2025-11-26 12:38:34.507181159 +0000 UTC m=+0.027452171 container create 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 07:38:34 np0005536586 systemd[1]: Started libpod-conmon-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope.
Nov 26 07:38:34 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:34 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:34 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:34 np0005536586 podman[94309]: 2025-11-26 12:38:34.554298787 +0000 UTC m=+0.074569799 container init 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:34 np0005536586 podman[94309]: 2025-11-26 12:38:34.558377444 +0000 UTC m=+0.078648455 container start 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:34 np0005536586 podman[94309]: 2025-11-26 12:38:34.559521207 +0000 UTC m=+0.079792219 container attach 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:34 np0005536586 podman[94309]: 2025-11-26 12:38:34.49583386 +0000 UTC m=+0.016104872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:34 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:34 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 26 07:38:35 np0005536586 infallible_noyce[94322]: pool 'images' created
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 26 07:38:35 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:35 np0005536586 systemd[1]: libpod-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope: Deactivated successfully.
Nov 26 07:38:35 np0005536586 conmon[94322]: conmon 422a7128068f7b120455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope/container/memory.events
Nov 26 07:38:35 np0005536586 podman[94309]: 2025-11-26 12:38:35.253505165 +0000 UTC m=+0.773776177 container died 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:35 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc-merged.mount: Deactivated successfully.
Nov 26 07:38:35 np0005536586 podman[94309]: 2025-11-26 12:38:35.274062309 +0000 UTC m=+0.794333321 container remove 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:35 np0005536586 systemd[1]: libpod-conmon-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope: Deactivated successfully.
Nov 26 07:38:35 np0005536586 python3[94384]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:35 np0005536586 podman[94385]: 2025-11-26 12:38:35.522033785 +0000 UTC m=+0.026447460 container create bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:35 np0005536586 systemd[1]: Started libpod-conmon-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope.
Nov 26 07:38:35 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:35 np0005536586 podman[94385]: 2025-11-26 12:38:35.580641542 +0000 UTC m=+0.085055227 container init bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:38:35 np0005536586 podman[94385]: 2025-11-26 12:38:35.585018943 +0000 UTC m=+0.089432629 container start bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:38:35 np0005536586 podman[94385]: 2025-11-26 12:38:35.586210559 +0000 UTC m=+0.090624233 container attach bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:35 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:35 np0005536586 podman[94385]: 2025-11-26 12:38:35.51181459 +0000 UTC m=+0.016228285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v41: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:38:35
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.400000) are unknown; try again later
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:38:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:38:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:36 np0005536586 mystifying_buck[94397]: pool 'cephfs.cephfs.meta' created
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 26 07:38:36 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:36 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:36 np0005536586 systemd[1]: libpod-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope: Deactivated successfully.
Nov 26 07:38:36 np0005536586 podman[94385]: 2025-11-26 12:38:36.254005005 +0000 UTC m=+0.758418680 container died bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 07:38:36 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f-merged.mount: Deactivated successfully.
Nov 26 07:38:36 np0005536586 podman[94385]: 2025-11-26 12:38:36.275824768 +0000 UTC m=+0.780238443 container remove bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:38:36 np0005536586 systemd[1]: libpod-conmon-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope: Deactivated successfully.
Nov 26 07:38:36 np0005536586 python3[94458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:36 np0005536586 podman[94459]: 2025-11-26 12:38:36.517953639 +0000 UTC m=+0.024851340 container create ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:36 np0005536586 systemd[1]: Started libpod-conmon-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope.
Nov 26 07:38:36 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:36 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:36 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:36 np0005536586 podman[94459]: 2025-11-26 12:38:36.559017743 +0000 UTC m=+0.065915464 container init ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:38:36 np0005536586 podman[94459]: 2025-11-26 12:38:36.562633193 +0000 UTC m=+0.069530894 container start ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 26 07:38:36 np0005536586 podman[94459]: 2025-11-26 12:38:36.563561811 +0000 UTC m=+0.070459532 container attach ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:36 np0005536586 podman[94459]: 2025-11-26 12:38:36.508227828 +0000 UTC m=+0.015125549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:36 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 07:38:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 26 07:38:37 np0005536586 brave_austin[94471]: pool 'cephfs.cephfs.data' created
Nov 26 07:38:37 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:37 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:37 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:37 np0005536586 systemd[1]: libpod-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope: Deactivated successfully.
Nov 26 07:38:37 np0005536586 podman[94459]: 2025-11-26 12:38:37.264421747 +0000 UTC m=+0.771319448 container died ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:38:37 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659-merged.mount: Deactivated successfully.
Nov 26 07:38:37 np0005536586 podman[94459]: 2025-11-26 12:38:37.285775178 +0000 UTC m=+0.792672879 container remove ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 07:38:37 np0005536586 systemd[1]: libpod-conmon-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope: Deactivated successfully.
Nov 26 07:38:37 np0005536586 python3[94534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:37 np0005536586 podman[94535]: 2025-11-26 12:38:37.552936872 +0000 UTC m=+0.027865382 container create 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:38:37 np0005536586 systemd[1]: Started libpod-conmon-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope.
Nov 26 07:38:37 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:37 np0005536586 podman[94535]: 2025-11-26 12:38:37.589309149 +0000 UTC m=+0.064237669 container init 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:37 np0005536586 podman[94535]: 2025-11-26 12:38:37.593450125 +0000 UTC m=+0.068378634 container start 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:38:37 np0005536586 podman[94535]: 2025-11-26 12:38:37.594789428 +0000 UTC m=+0.069717958 container attach 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:38:37 np0005536586 podman[94535]: 2025-11-26 12:38:37.541887146 +0000 UTC m=+0.016815676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v44: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:37 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 26 07:38:38 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.971715927s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active pruub 25.439893723s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:38 np0005536586 adoring_haibt[94547]: enabled application 'rbd' on pool 'vms'
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 26 07:38:38 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.971715927s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown pruub 25.439893723s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:38 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=11.972233772s) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active pruub 30.163213730s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:38 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 07:38:38 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=11.972233772s) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown pruub 30.163213730s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 07:38:38 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:38 np0005536586 systemd[1]: libpod-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope: Deactivated successfully.
Nov 26 07:38:38 np0005536586 podman[94572]: 2025-11-26 12:38:38.302133864 +0000 UTC m=+0.016036563 container died 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:38 np0005536586 systemd[1]: var-lib-containers-storage-overlay-01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367-merged.mount: Deactivated successfully.
Nov 26 07:38:38 np0005536586 podman[94572]: 2025-11-26 12:38:38.320279816 +0000 UTC m=+0.034182495 container remove 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:38:38 np0005536586 systemd[1]: libpod-conmon-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope: Deactivated successfully.
Nov 26 07:38:38 np0005536586 python3[94608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:38 np0005536586 podman[94609]: 2025-11-26 12:38:38.565732306 +0000 UTC m=+0.026441910 container create 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:38 np0005536586 systemd[1]: Started libpod-conmon-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope.
Nov 26 07:38:38 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:38 np0005536586 podman[94609]: 2025-11-26 12:38:38.611699496 +0000 UTC m=+0.072409100 container init 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:38:38 np0005536586 podman[94609]: 2025-11-26 12:38:38.61571337 +0000 UTC m=+0.076422974 container start 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 07:38:38 np0005536586 podman[94609]: 2025-11-26 12:38:38.616817369 +0000 UTC m=+0.077526974 container attach 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:38:38 np0005536586 podman[94609]: 2025-11-26 12:38:38.555294757 +0000 UTC m=+0.016004381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 26 07:38:39 np0005536586 sweet_greider[94621]: enabled application 'rbd' on pool 'volumes'
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:39 np0005536586 systemd[1]: libpod-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope: Deactivated successfully.
Nov 26 07:38:39 np0005536586 conmon[94621]: conmon 8cfdd0747598d806f5b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope/container/memory.events
Nov 26 07:38:39 np0005536586 podman[94646]: 2025-11-26 12:38:39.300793021 +0000 UTC m=+0.015802028 container died 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:38:39 np0005536586 systemd[1]: var-lib-containers-storage-overlay-00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58-merged.mount: Deactivated successfully.
Nov 26 07:38:39 np0005536586 podman[94646]: 2025-11-26 12:38:39.323368745 +0000 UTC m=+0.038377732 container remove 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:38:39 np0005536586 systemd[1]: libpod-conmon-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope: Deactivated successfully.
Nov 26 07:38:39 np0005536586 python3[94683]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:39 np0005536586 podman[94684]: 2025-11-26 12:38:39.58974941 +0000 UTC m=+0.027151652 container create 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:39 np0005536586 systemd[1]: Started libpod-conmon-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope.
Nov 26 07:38:39 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:39 np0005536586 podman[94684]: 2025-11-26 12:38:39.640159708 +0000 UTC m=+0.077561960 container init 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:39 np0005536586 podman[94684]: 2025-11-26 12:38:39.644697784 +0000 UTC m=+0.082100026 container start 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:39 np0005536586 podman[94684]: 2025-11-26 12:38:39.645724797 +0000 UTC m=+0.083127059 container attach 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:38:39 np0005536586 podman[94684]: 2025-11-26 12:38:39.578549981 +0000 UTC m=+0.015952224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 26 07:38:39 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 26 07:38:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v47: 69 pgs: 1 peering, 32 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 26 07:38:40 np0005536586 vigorous_hellman[94697]: enabled application 'rbd' on pool 'backups'
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 26 07:38:40 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=10.968404770s) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active pruub 34.490970612s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 07:38:40 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=10.968404770s) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown pruub 34.490970612s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:40 np0005536586 systemd[1]: libpod-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope: Deactivated successfully.
Nov 26 07:38:40 np0005536586 podman[94684]: 2025-11-26 12:38:40.285328647 +0000 UTC m=+0.722730889 container died 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:38:40 np0005536586 systemd[1]: var-lib-containers-storage-overlay-65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45-merged.mount: Deactivated successfully.
Nov 26 07:38:40 np0005536586 podman[94684]: 2025-11-26 12:38:40.306596696 +0000 UTC m=+0.743998938 container remove 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:40 np0005536586 systemd[1]: libpod-conmon-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope: Deactivated successfully.
Nov 26 07:38:40 np0005536586 python3[94756]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:40 np0005536586 podman[94757]: 2025-11-26 12:38:40.549421013 +0000 UTC m=+0.028324111 container create a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:40 np0005536586 systemd[1]: Started libpod-conmon-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope.
Nov 26 07:38:40 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:40 np0005536586 podman[94757]: 2025-11-26 12:38:40.605527578 +0000 UTC m=+0.084430695 container init a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:40 np0005536586 podman[94757]: 2025-11-26 12:38:40.609447485 +0000 UTC m=+0.088350582 container start a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 07:38:40 np0005536586 podman[94757]: 2025-11-26 12:38:40.611946252 +0000 UTC m=+0.090849349 container attach a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 07:38:40 np0005536586 podman[94757]: 2025-11-26 12:38:40.53765964 +0000 UTC m=+0.016562747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:40 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=11.576715469s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active pruub 28.454172134s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:40 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=11.576715469s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown pruub 28.454172134s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:40 np0005536586 ceph-mgr[75236]: [progress INFO root] Writing back 7 completed events
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 26 07:38:41 np0005536586 thirsty_margulis[94769]: enabled application 'rbd' on pool 'images'
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.10( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.11( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.12( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.13( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.14( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.15( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.16( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.17( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.8( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.9( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.7( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.6( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.5( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.4( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.3( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.2( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.19( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.18( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.10( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.17( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.6( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1b( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 systemd[1]: libpod-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope: Deactivated successfully.
Nov 26 07:38:41 np0005536586 podman[94757]: 2025-11-26 12:38:41.292749692 +0000 UTC m=+0.771652800 container died a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:41 np0005536586 systemd[1]: var-lib-containers-storage-overlay-64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58-merged.mount: Deactivated successfully.
Nov 26 07:38:41 np0005536586 podman[94757]: 2025-11-26 12:38:41.31394842 +0000 UTC m=+0.792851517 container remove a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:38:41 np0005536586 systemd[1]: libpod-conmon-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope: Deactivated successfully.
Nov 26 07:38:41 np0005536586 python3[94831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:41 np0005536586 podman[94832]: 2025-11-26 12:38:41.559415808 +0000 UTC m=+0.028118571 container create 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:41 np0005536586 systemd[1]: Started libpod-conmon-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope.
Nov 26 07:38:41 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:41 np0005536586 podman[94832]: 2025-11-26 12:38:41.610347311 +0000 UTC m=+0.079050074 container init 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:38:41 np0005536586 podman[94832]: 2025-11-26 12:38:41.615006356 +0000 UTC m=+0.083709119 container start 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:41 np0005536586 podman[94832]: 2025-11-26 12:38:41.61619764 +0000 UTC m=+0.084900403 container attach 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:38:41 np0005536586 podman[94832]: 2025-11-26 12:38:41.547971335 +0000 UTC m=+0.016674128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v50: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 07:38:42 np0005536586 cool_archimedes[94844]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 26 07:38:42 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 26 07:38:42 np0005536586 systemd[1]: libpod-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope: Deactivated successfully.
Nov 26 07:38:42 np0005536586 conmon[94844]: conmon 061d1dda6ea6cbd95273 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope/container/memory.events
Nov 26 07:38:42 np0005536586 podman[94870]: 2025-11-26 12:38:42.335328054 +0000 UTC m=+0.015425646 container died 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:38:42 np0005536586 systemd[1]: var-lib-containers-storage-overlay-cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656-merged.mount: Deactivated successfully.
Nov 26 07:38:42 np0005536586 podman[94870]: 2025-11-26 12:38:42.356513628 +0000 UTC m=+0.036611219 container remove 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:42 np0005536586 systemd[1]: libpod-conmon-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope: Deactivated successfully.
Nov 26 07:38:42 np0005536586 python3[94907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:42 np0005536586 podman[94908]: 2025-11-26 12:38:42.608417343 +0000 UTC m=+0.026543020 container create d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:38:42 np0005536586 systemd[1]: Started libpod-conmon-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope.
Nov 26 07:38:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:42 np0005536586 podman[94908]: 2025-11-26 12:38:42.663164897 +0000 UTC m=+0.081290594 container init d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:38:42 np0005536586 podman[94908]: 2025-11-26 12:38:42.667008919 +0000 UTC m=+0.085134596 container start d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:42 np0005536586 podman[94908]: 2025-11-26 12:38:42.668128908 +0000 UTC m=+0.086254586 container attach d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:42 np0005536586 podman[94908]: 2025-11-26 12:38:42.598101044 +0000 UTC m=+0.016226742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 26 07:38:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 26 07:38:43 np0005536586 magical_moore[94920]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 26 07:38:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 26 07:38:43 np0005536586 systemd[1]: libpod-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope: Deactivated successfully.
Nov 26 07:38:43 np0005536586 podman[94908]: 2025-11-26 12:38:43.312152361 +0000 UTC m=+0.730278058 container died d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 26 07:38:43 np0005536586 systemd[1]: var-lib-containers-storage-overlay-adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8-merged.mount: Deactivated successfully.
Nov 26 07:38:43 np0005536586 podman[94908]: 2025-11-26 12:38:43.333352331 +0000 UTC m=+0.751478008 container remove d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:38:43 np0005536586 systemd[1]: libpod-conmon-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope: Deactivated successfully.
Nov 26 07:38:43 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 26 07:38:43 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 26 07:38:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v53: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:38:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:43 np0005536586 python3[95030]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:38:44 np0005536586 python3[95101]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160723.7980077-37044-39369632318689/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988976479s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505214691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966246605s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482488632s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966189384s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482465744s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966205597s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482488632s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966147423s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482465744s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966093063s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482452393s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988794327s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505176544s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988755226s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505176544s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966007233s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482452393s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965889931s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482442856s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965874672s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482442856s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988616943s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505237579s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988597870s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505237579s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988555908s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505214691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965748787s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482437134s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965732574s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482437134s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988538742s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505271912s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988524437s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505271912s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988522530s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505287170s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988505363s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505287170s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965608597s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482414246s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965594292s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482414246s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988453865s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505287170s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988427162s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505290985s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988440514s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505287170s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988411903s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505290985s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988368988s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505310059s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988312721s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505310059s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965442657s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482463837s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988078117s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505344391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965351105s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482463837s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965111732s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482398987s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988045692s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505344391s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965085030s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482398987s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964999199s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482330322s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964978218s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482330322s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987951279s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505378723s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987937927s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505378723s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961322784s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478792191s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961305618s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478792191s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987884521s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505397797s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961241722s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478790283s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987843513s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505397797s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961225510s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478790283s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987816811s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505416870s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987801552s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505416870s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964609146s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482233047s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964589119s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482233047s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961126328s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478786469s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961111069s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478786469s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964792252s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482503891s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987697601s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505424500s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964778900s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482503891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987675667s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505424500s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987970352s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505790710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987954140s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505790710s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960905075s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478773117s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987540245s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505432129s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960890770s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478773117s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955610275s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199962616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955582619s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199962616s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987526894s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505432129s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955240250s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199695587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955218315s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199695587s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955148697s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199703217s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955130577s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199703217s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960819244s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478759766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955301285s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199954987s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955283165s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199954987s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987499237s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505443573s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955018044s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199752808s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955002785s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199752808s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960766792s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478731155s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955121994s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199932098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955104828s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199932098s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987483025s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505443573s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955128670s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200031281s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955093384s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200031281s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960752487s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478731155s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954947472s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199966431s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954934120s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199966431s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960717201s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478717804s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954885483s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199993134s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954867363s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199993134s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960759163s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478759766s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954810143s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199996948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954795837s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199996948s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960703850s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478717804s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954736710s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200008392s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954721451s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200008392s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960652351s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478710175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954665184s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200012207s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954650879s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200012207s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960639000s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478710175s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987387657s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505470276s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954570770s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200042725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954553604s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200042725s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987366676s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505470276s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958213806s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203784943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958196640s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203784943s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960472107s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478660583s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957991600s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203651428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957976341s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203651428s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957974434s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203727722s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957956314s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203727722s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960059166s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478660583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958221436s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.204078674s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958209991s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.204078674s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986842155s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505485535s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960030556s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478685379s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960004807s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478685379s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958020210s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203979492s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958008766s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203979492s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957818985s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203834534s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957925797s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203968048s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957794189s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203834534s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957797050s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203834534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957764626s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203834534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957838058s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203968048s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963511467s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482202530s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.11( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963496208s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482202530s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986742973s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505485535s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986726761s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505485535s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986721039s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505496979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986720085s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505485535s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986707687s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505496979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.14( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963317871s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482355118s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963282585s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482355118s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.1e( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.16( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.8( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.8( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.7( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.3( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.18( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.2( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.5( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1c( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.18( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.1e( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977860451s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548255920s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.973085403s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543510437s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977839470s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548255920s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.973063469s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543510437s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972986221s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543521881s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972971916s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543521881s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972811699s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543590546s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972809792s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543621063s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972784042s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543590546s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972793579s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543621063s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972681999s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543605804s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972705841s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543632507s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972665787s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543605804s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972690582s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543632507s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977067947s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548099518s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977055550s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548099518s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977051735s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548171997s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977040291s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548171997s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976967812s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548179626s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976955414s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548179626s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976746559s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548088074s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976715088s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548088074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976774216s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548225403s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976761818s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548225403s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976682663s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548217773s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976669312s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548217773s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976590157s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548267365s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976564407s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548236847s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976575851s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548267365s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976531029s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548236847s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976515770s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548278809s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976505280s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548278809s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976527214s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548336029s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976513863s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548336029s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976540565s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548381805s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976528168s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548381805s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976535797s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548473358s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976524353s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548473358s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.975943565s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548412323s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.975918770s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548412323s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1c( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.11( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 26 07:38:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 26 07:38:44 np0005536586 python3[95203]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:38:44 np0005536586 python3[95278]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160724.4477854-37058-196574120893949/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f59bc0653853925cdc06336edac42275833fbc2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:38:45 np0005536586 python3[95328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.226872608 +0000 UTC m=+0.028325623 container create 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:45 np0005536586 systemd[1]: Started libpod-conmon-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope.
Nov 26 07:38:45 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 26 07:38:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.283699957 +0000 UTC m=+0.085152972 container init 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.288841444 +0000 UTC m=+0.090294459 container start 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.290087512 +0000 UTC m=+0.091540526 container attach 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.18( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.7( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.8( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.1e( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: Cluster is now healthy
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.215434817 +0000 UTC m=+0.016887853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.19( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1d( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.c( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.12( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.13( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.17( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.12( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1f( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.6( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1b( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 26 07:38:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 07:38:45 np0005536586 mystifying_archimedes[95341]: 
Nov 26 07:38:45 np0005536586 mystifying_archimedes[95341]: [global]
Nov 26 07:38:45 np0005536586 mystifying_archimedes[95341]: #011fsid = f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 07:38:45 np0005536586 mystifying_archimedes[95341]: #011mon_host = 192.168.122.100
Nov 26 07:38:45 np0005536586 systemd[1]: libpod-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope: Deactivated successfully.
Nov 26 07:38:45 np0005536586 conmon[95341]: conmon 80cfd4b7c3f9b5966c8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope/container/memory.events
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.739460612 +0000 UTC m=+0.540913626 container died 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:45 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d-merged.mount: Deactivated successfully.
Nov 26 07:38:45 np0005536586 podman[95329]: 2025-11-26 12:38:45.763725292 +0000 UTC m=+0.565178307 container remove 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:38:45 np0005536586 systemd[1]: libpod-conmon-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope: Deactivated successfully.
Nov 26 07:38:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v56: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:45 np0005536586 python3[95480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.035354087 +0000 UTC m=+0.030662794 container create 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:46 np0005536586 systemd[1]: Started libpod-conmon-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope.
Nov 26 07:38:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.092221571 +0000 UTC m=+0.087530289 container init 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.097650933 +0000 UTC m=+0.092959642 container start 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.100151945 +0000 UTC m=+0.095460653 container attach 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.021916473 +0000 UTC m=+0.017225191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:46 np0005536586 podman[95575]: 2025-11-26 12:38:46.286520407 +0000 UTC m=+0.040482284 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 07:38:46 np0005536586 podman[95575]: 2025-11-26 12:38:46.367693688 +0000 UTC m=+0.121655555 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 26 07:38:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3736164057' entity='client.admin' 
Nov 26 07:38:46 np0005536586 ecstatic_benz[95532]: set ssl_option
Nov 26 07:38:46 np0005536586 systemd[1]: libpod-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope: Deactivated successfully.
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.618953695 +0000 UTC m=+0.614262414 container died 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:38:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d-merged.mount: Deactivated successfully.
Nov 26 07:38:46 np0005536586 podman[95502]: 2025-11-26 12:38:46.641488201 +0000 UTC m=+0.636796910 container remove 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:46 np0005536586 systemd[1]: libpod-conmon-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope: Deactivated successfully.
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:46 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 65b822e2-2cc8-4854-aa1f-a77e3f35e3c2 does not exist
Nov 26 07:38:46 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 35931af2-2cae-4920-86f0-5af614a04373 does not exist
Nov 26 07:38:46 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 631a5cf6-93a7-4ed3-8901-d8979f803a63 does not exist
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:46 np0005536586 python3[95763]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:46 np0005536586 podman[95831]: 2025-11-26 12:38:46.920097694 +0000 UTC m=+0.027538303 container create bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:38:46 np0005536586 systemd[1]: Started libpod-conmon-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope.
Nov 26 07:38:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:46 np0005536586 podman[95831]: 2025-11-26 12:38:46.972107588 +0000 UTC m=+0.079548207 container init bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:38:46 np0005536586 podman[95831]: 2025-11-26 12:38:46.979538086 +0000 UTC m=+0.086978686 container start bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:38:46 np0005536586 podman[95831]: 2025-11-26 12:38:46.980665009 +0000 UTC m=+0.088105618 container attach bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 26 07:38:47 np0005536586 podman[95831]: 2025-11-26 12:38:46.909724588 +0000 UTC m=+0.017165207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.111562719 +0000 UTC m=+0.026427272 container create 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:47 np0005536586 systemd[1]: Started libpod-conmon-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope.
Nov 26 07:38:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.172116979 +0000 UTC m=+0.086981542 container init 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.176273924 +0000 UTC m=+0.091138476 container start 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.177428028 +0000 UTC m=+0.092292600 container attach 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:38:47 np0005536586 adoring_williams[95892]: 167 167
Nov 26 07:38:47 np0005536586 systemd[1]: libpod-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope: Deactivated successfully.
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.179474178 +0000 UTC m=+0.094338851 container died 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-00197932d927709c508c8fc3970a45be4c9c1ef084f0e7f0c797526f5cb81151-merged.mount: Deactivated successfully.
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.100608804 +0000 UTC m=+0.015473377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:47 np0005536586 podman[95879]: 2025-11-26 12:38:47.199260776 +0000 UTC m=+0.114125328 container remove 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:38:47 np0005536586 systemd[1]: libpod-conmon-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope: Deactivated successfully.
Nov 26 07:38:47 np0005536586 podman[95932]: 2025-11-26 12:38:47.310818704 +0000 UTC m=+0.027075188 container create 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:47 np0005536586 systemd[1]: Started libpod-conmon-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope.
Nov 26 07:38:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:47 np0005536586 podman[95932]: 2025-11-26 12:38:47.372146237 +0000 UTC m=+0.088402720 container init 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 07:38:47 np0005536586 podman[95932]: 2025-11-26 12:38:47.376643245 +0000 UTC m=+0.092899729 container start 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:38:47 np0005536586 podman[95932]: 2025-11-26 12:38:47.377991466 +0000 UTC m=+0.094247950 container attach 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:38:47 np0005536586 podman[95932]: 2025-11-26 12:38:47.300016847 +0000 UTC m=+0.016273331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:47 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:38:47 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:47 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:47 np0005536586 optimistic_easley[95844]: Scheduled rgw.rgw update...
Nov 26 07:38:47 np0005536586 systemd[1]: libpod-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope: Deactivated successfully.
Nov 26 07:38:47 np0005536586 conmon[95844]: conmon bda7263d224fb9c281fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope/container/memory.events
Nov 26 07:38:47 np0005536586 podman[95952]: 2025-11-26 12:38:47.490211328 +0000 UTC m=+0.018775545 container died bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:47 np0005536586 podman[95952]: 2025-11-26 12:38:47.509822011 +0000 UTC m=+0.038386208 container remove bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:47 np0005536586 systemd[1]: libpod-conmon-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope: Deactivated successfully.
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/3736164057' entity='client.admin' 
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126-merged.mount: Deactivated successfully.
Nov 26 07:38:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v57: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:48 np0005536586 cool_rubin[95945]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:38:48 np0005536586 cool_rubin[95945]: --> relative data size: 1.0
Nov 26 07:38:48 np0005536586 cool_rubin[95945]: --> All data devices are unavailable
Nov 26 07:38:48 np0005536586 systemd[1]: libpod-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope: Deactivated successfully.
Nov 26 07:38:48 np0005536586 podman[95932]: 2025-11-26 12:38:48.207433622 +0000 UTC m=+0.923690106 container died 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:38:48 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936-merged.mount: Deactivated successfully.
Nov 26 07:38:48 np0005536586 python3[96058]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:38:48 np0005536586 podman[95932]: 2025-11-26 12:38:48.25478933 +0000 UTC m=+0.971045814 container remove 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 26 07:38:48 np0005536586 systemd[1]: libpod-conmon-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope: Deactivated successfully.
Nov 26 07:38:48 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 26 07:38:48 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 26 07:38:48 np0005536586 python3[96196]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160728.0442-37099-40743055204557/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:38:48 np0005536586 ceph-mon[74966]: Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.676826931 +0000 UTC m=+0.025504816 container create b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 07:38:48 np0005536586 systemd[1]: Started libpod-conmon-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope.
Nov 26 07:38:48 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.720717723 +0000 UTC m=+0.069395618 container init b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.726456 +0000 UTC m=+0.075133885 container start b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.727574035 +0000 UTC m=+0.076251930 container attach b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 07:38:48 np0005536586 strange_agnesi[96315]: 167 167
Nov 26 07:38:48 np0005536586 systemd[1]: libpod-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope: Deactivated successfully.
Nov 26 07:38:48 np0005536586 conmon[96315]: conmon b008384995b5a9d28264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope/container/memory.events
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.730498438 +0000 UTC m=+0.079176323 container died b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:48 np0005536586 systemd[1]: var-lib-containers-storage-overlay-62bc359fdcf477465e5818463bbced9d1256fc2711aec9341d2b38bac8659573-merged.mount: Deactivated successfully.
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.749403277 +0000 UTC m=+0.098081162 container remove b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:38:48 np0005536586 podman[96297]: 2025-11-26 12:38:48.666934795 +0000 UTC m=+0.015612710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 26 07:38:48 np0005536586 systemd[1]: libpod-conmon-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope: Deactivated successfully.
Nov 26 07:38:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 26 07:38:48 np0005536586 podman[96359]: 2025-11-26 12:38:48.861065374 +0000 UTC m=+0.028549547 container create 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:48 np0005536586 python3[96343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:48 np0005536586 systemd[1]: Started libpod-conmon-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope.
Nov 26 07:38:48 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 podman[96370]: 2025-11-26 12:38:48.909339578 +0000 UTC m=+0.029599875 container create 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 07:38:48 np0005536586 podman[96359]: 2025-11-26 12:38:48.911060285 +0000 UTC m=+0.078544469 container init 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:38:48 np0005536586 podman[96359]: 2025-11-26 12:38:48.918895438 +0000 UTC m=+0.086379612 container start 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 07:38:48 np0005536586 podman[96359]: 2025-11-26 12:38:48.919887324 +0000 UTC m=+0.087371499 container attach 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:38:48 np0005536586 systemd[1]: Started libpod-conmon-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope.
Nov 26 07:38:48 np0005536586 podman[96359]: 2025-11-26 12:38:48.849211455 +0000 UTC m=+0.016695650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:48 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:48 np0005536586 podman[96370]: 2025-11-26 12:38:48.96750599 +0000 UTC m=+0.087766296 container init 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:38:48 np0005536586 podman[96370]: 2025-11-26 12:38:48.972006755 +0000 UTC m=+0.092267051 container start 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:48 np0005536586 podman[96370]: 2025-11-26 12:38:48.973037375 +0000 UTC m=+0.093297672 container attach 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:38:48 np0005536586 podman[96370]: 2025-11-26 12:38:48.897323584 +0000 UTC m=+0.017583880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 26 07:38:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 07:38:49 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74962]: 2025-11-26T12:38:49.414+0000 7f8067058640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e2 new map
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T12:38:49.414687+0000#012modified#0112025-11-26T12:38:49.414741+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 07:38:49 np0005536586 systemd[1]: libpod-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope: Deactivated successfully.
Nov 26 07:38:49 np0005536586 podman[96370]: 2025-11-26 12:38:49.438230978 +0000 UTC m=+0.558491274 container died 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:49 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc-merged.mount: Deactivated successfully.
Nov 26 07:38:49 np0005536586 podman[96370]: 2025-11-26 12:38:49.46148722 +0000 UTC m=+0.581747515 container remove 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:49 np0005536586 systemd[1]: libpod-conmon-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope: Deactivated successfully.
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]: {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    "0": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "devices": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "/dev/loop3"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            ],
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_name": "ceph_lv0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_size": "21470642176",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "name": "ceph_lv0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "tags": {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.crush_device_class": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.encrypted": "0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_id": "0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.vdo": "0"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            },
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "vg_name": "ceph_vg0"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        }
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    ],
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    "1": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "devices": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "/dev/loop4"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            ],
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_name": "ceph_lv1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_size": "21470642176",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "name": "ceph_lv1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "tags": {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.crush_device_class": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.encrypted": "0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_id": "1",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.vdo": "0"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            },
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "vg_name": "ceph_vg1"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        }
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    ],
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    "2": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "devices": [
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "/dev/loop5"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            ],
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_name": "ceph_lv2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_size": "21470642176",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "name": "ceph_lv2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "tags": {
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.crush_device_class": "",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.encrypted": "0",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osd_id": "2",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:                "ceph.vdo": "0"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            },
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "type": "block",
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:            "vg_name": "ceph_vg2"
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:        }
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]:    ]
Nov 26 07:38:49 np0005536586 relaxed_wozniak[96379]: }
Nov 26 07:38:49 np0005536586 systemd[1]: libpod-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope: Deactivated successfully.
Nov 26 07:38:49 np0005536586 podman[96359]: 2025-11-26 12:38:49.574317646 +0000 UTC m=+0.741801820 container died 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:38:49 np0005536586 podman[96359]: 2025-11-26 12:38:49.604613817 +0000 UTC m=+0.772097990 container remove 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:38:49 np0005536586 systemd[1]: libpod-conmon-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope: Deactivated successfully.
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 07:38:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:49 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac-merged.mount: Deactivated successfully.
Nov 26 07:38:49 np0005536586 python3[96462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:49 np0005536586 podman[96515]: 2025-11-26 12:38:49.758824295 +0000 UTC m=+0.029176413 container create 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:38:49 np0005536586 systemd[1]: Started libpod-conmon-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope.
Nov 26 07:38:49 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:49 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:49 np0005536586 podman[96515]: 2025-11-26 12:38:49.807772536 +0000 UTC m=+0.078124674 container init 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:38:49 np0005536586 podman[96515]: 2025-11-26 12:38:49.81258415 +0000 UTC m=+0.082936268 container start 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:38:49 np0005536586 podman[96515]: 2025-11-26 12:38:49.813855516 +0000 UTC m=+0.084207634 container attach 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:38:49 np0005536586 podman[96515]: 2025-11-26 12:38:49.747150449 +0000 UTC m=+0.017502587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v59: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.054662545 +0000 UTC m=+0.034428300 container create 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:38:50 np0005536586 systemd[1]: Started libpod-conmon-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope.
Nov 26 07:38:50 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.106962989 +0000 UTC m=+0.086728742 container init 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.111516393 +0000 UTC m=+0.091282147 container start 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.11261923 +0000 UTC m=+0.092384985 container attach 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:50 np0005536586 musing_tu[96641]: 167 167
Nov 26 07:38:50 np0005536586 systemd[1]: libpod-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope: Deactivated successfully.
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.115535899 +0000 UTC m=+0.095301654 container died 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:50 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e04ebd3e4396ff076163d9e232f433233387c310d8b80806be195ccadf5a1953-merged.mount: Deactivated successfully.
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.036279555 +0000 UTC m=+0.016045319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:50 np0005536586 podman[96611]: 2025-11-26 12:38:50.136625311 +0000 UTC m=+0.116391064 container remove 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:50 np0005536586 systemd[1]: libpod-conmon-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope: Deactivated successfully.
Nov 26 07:38:50 np0005536586 podman[96664]: 2025-11-26 12:38:50.246912636 +0000 UTC m=+0.026965871 container create e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:50 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:38:50 np0005536586 ceph-mgr[75236]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:50 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:50 np0005536586 ecstatic_ishizaka[96564]: Scheduled mds.cephfs update...
Nov 26 07:38:50 np0005536586 systemd[1]: Started libpod-conmon-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope.
Nov 26 07:38:50 np0005536586 systemd[1]: libpod-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope: Deactivated successfully.
Nov 26 07:38:50 np0005536586 podman[96515]: 2025-11-26 12:38:50.282709564 +0000 UTC m=+0.553061682 container died 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:38:50 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:50 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:50 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:50 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:50 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:50 np0005536586 podman[96664]: 2025-11-26 12:38:50.308107648 +0000 UTC m=+0.088160883 container init e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:38:50 np0005536586 podman[96664]: 2025-11-26 12:38:50.313983364 +0000 UTC m=+0.094036600 container start e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:38:50 np0005536586 podman[96664]: 2025-11-26 12:38:50.315580285 +0000 UTC m=+0.095633521 container attach e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:50 np0005536586 podman[96515]: 2025-11-26 12:38:50.319663501 +0000 UTC m=+0.590015619 container remove 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:38:50 np0005536586 systemd[1]: libpod-conmon-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope: Deactivated successfully.
Nov 26 07:38:50 np0005536586 podman[96664]: 2025-11-26 12:38:50.235521564 +0000 UTC m=+0.015574799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: Saving service mds.cephfs spec with placement compute-0
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:50 np0005536586 systemd[1]: var-lib-containers-storage-overlay-7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788-merged.mount: Deactivated successfully.
Nov 26 07:38:50 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 26 07:38:50 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 26 07:38:50 np0005536586 python3[96773]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 07:38:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]: {
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_id": 1,
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "type": "bluestore"
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    },
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_id": 2,
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "type": "bluestore"
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    },
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_id": 0,
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:        "type": "bluestore"
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]:    }
Nov 26 07:38:51 np0005536586 agitated_mclean[96680]: }
Nov 26 07:38:51 np0005536586 python3[96857]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160730.610688-37129-213518900630394/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=c49cad1c73fc246f2066e2f44ed85f4bdde7800e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:38:51 np0005536586 systemd[1]: libpod-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope: Deactivated successfully.
Nov 26 07:38:51 np0005536586 podman[96664]: 2025-11-26 12:38:51.101689085 +0000 UTC m=+0.881742320 container died e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:51 np0005536586 systemd[1]: var-lib-containers-storage-overlay-575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23-merged.mount: Deactivated successfully.
Nov 26 07:38:51 np0005536586 podman[96664]: 2025-11-26 12:38:51.134711234 +0000 UTC m=+0.914764469 container remove e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:51 np0005536586 systemd[1]: libpod-conmon-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope: Deactivated successfully.
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Nov 26 07:38:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Nov 26 07:38:51 np0005536586 python3[97038]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:51 np0005536586 podman[97084]: 2025-11-26 12:38:51.499678677 +0000 UTC m=+0.028042508 container create b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:51 np0005536586 systemd[1]: Started libpod-conmon-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope.
Nov 26 07:38:51 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:51 np0005536586 podman[97084]: 2025-11-26 12:38:51.546642463 +0000 UTC m=+0.075006324 container init b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:51 np0005536586 podman[97084]: 2025-11-26 12:38:51.551826212 +0000 UTC m=+0.080190052 container start b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 07:38:51 np0005536586 podman[97084]: 2025-11-26 12:38:51.553166587 +0000 UTC m=+0.081530427 container attach b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 07:38:51 np0005536586 podman[97084]: 2025-11-26 12:38:51.489043596 +0000 UTC m=+0.017407466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:51 np0005536586 podman[97156]: 2025-11-26 12:38:51.751499797 +0000 UTC m=+0.036650624 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:38:51 np0005536586 podman[97156]: 2025-11-26 12:38:51.83102046 +0000 UTC m=+0.116171268 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v60: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 07:38:52 np0005536586 systemd[1]: libpod-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope: Deactivated successfully.
Nov 26 07:38:52 np0005536586 podman[97084]: 2025-11-26 12:38:52.04917903 +0000 UTC m=+0.577542871 container died b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb-merged.mount: Deactivated successfully.
Nov 26 07:38:52 np0005536586 podman[97084]: 2025-11-26 12:38:52.075187017 +0000 UTC m=+0.603550857 container remove b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:52 np0005536586 systemd[1]: libpod-conmon-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope: Deactivated successfully.
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:52 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 26 07:38:52 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 26 07:38:52 np0005536586 python3[97424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:52 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev da6f4f28-5072-4ae3-9595-ff4e2e68a273 does not exist
Nov 26 07:38:52 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 52a082a4-7168-4e4e-830b-3168386aba5c does not exist
Nov 26 07:38:52 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 23f2d4f4-ce91-4c7f-9410-3c88dd7e81c8 does not exist
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:38:52 np0005536586 podman[97437]: 2025-11-26 12:38:52.665366714 +0000 UTC m=+0.056658750 container create 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:52 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 26 07:38:52 np0005536586 systemd[1]: Started libpod-conmon-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope.
Nov 26 07:38:52 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 26 07:38:52 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:52 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:52 np0005536586 podman[97437]: 2025-11-26 12:38:52.714609664 +0000 UTC m=+0.105901709 container init 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 07:38:52 np0005536586 podman[97437]: 2025-11-26 12:38:52.720593485 +0000 UTC m=+0.111885500 container start 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 07:38:52 np0005536586 podman[97437]: 2025-11-26 12:38:52.724925031 +0000 UTC m=+0.116217066 container attach 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:52 np0005536586 podman[97437]: 2025-11-26 12:38:52.651117105 +0000 UTC m=+0.042409150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.077323724 +0000 UTC m=+0.024682041 container create 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 26 07:38:53 np0005536586 systemd[1]: Started libpod-conmon-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope.
Nov 26 07:38:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.128678558 +0000 UTC m=+0.076036895 container init 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.133470404 +0000 UTC m=+0.080828721 container start 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.134659062 +0000 UTC m=+0.082017380 container attach 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:38:53 np0005536586 funny_lumiere[97625]: 167 167
Nov 26 07:38:53 np0005536586 systemd[1]: libpod-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope: Deactivated successfully.
Nov 26 07:38:53 np0005536586 conmon[97625]: conmon 9812cd3e44937140d798 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope/container/memory.events
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.137578106 +0000 UTC m=+0.084936423 container died 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2eaa0336e6fefa5578f3341062557e56b74e3def2d7a3db977af470202f307df-merged.mount: Deactivated successfully.
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.161140997 +0000 UTC m=+0.108499314 container remove 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:53 np0005536586 podman[97612]: 2025-11-26 12:38:53.067032492 +0000 UTC m=+0.014390829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:53 np0005536586 systemd[1]: libpod-conmon-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope: Deactivated successfully.
Nov 26 07:38:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 07:38:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011144616' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 07:38:53 np0005536586 elated_mclaren[97470]: 
Nov 26 07:38:53 np0005536586 elated_mclaren[97470]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":117,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":131}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83795968,"bytes_avail":64328130560,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T12:38:37.846513+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 07:38:53 np0005536586 systemd[1]: libpod-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope: Deactivated successfully.
Nov 26 07:38:53 np0005536586 podman[97437]: 2025-11-26 12:38:53.225133292 +0000 UTC m=+0.616425316 container died 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:38:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e-merged.mount: Deactivated successfully.
Nov 26 07:38:53 np0005536586 podman[97437]: 2025-11-26 12:38:53.249917604 +0000 UTC m=+0.641209629 container remove 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 07:38:53 np0005536586 systemd[1]: libpod-conmon-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope: Deactivated successfully.
Nov 26 07:38:53 np0005536586 podman[97657]: 2025-11-26 12:38:53.294982778 +0000 UTC m=+0.036154025 container create 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:53 np0005536586 systemd[1]: Started libpod-conmon-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope.
Nov 26 07:38:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 podman[97657]: 2025-11-26 12:38:53.350995876 +0000 UTC m=+0.092167133 container init 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:38:53 np0005536586 podman[97657]: 2025-11-26 12:38:53.35731451 +0000 UTC m=+0.098485758 container start 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:38:53 np0005536586 podman[97657]: 2025-11-26 12:38:53.358708127 +0000 UTC m=+0.099879384 container attach 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:38:53 np0005536586 podman[97657]: 2025-11-26 12:38:53.284431174 +0000 UTC m=+0.025602431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:53 np0005536586 python3[97702]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:53 np0005536586 podman[97703]: 2025-11-26 12:38:53.532876266 +0000 UTC m=+0.030204349 container create d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:38:53 np0005536586 systemd[1]: Started libpod-conmon-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope.
Nov 26 07:38:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:53 np0005536586 podman[97703]: 2025-11-26 12:38:53.585120101 +0000 UTC m=+0.082448195 container init d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 07:38:53 np0005536586 podman[97703]: 2025-11-26 12:38:53.589690569 +0000 UTC m=+0.087018652 container start d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:53 np0005536586 podman[97703]: 2025-11-26 12:38:53.591824456 +0000 UTC m=+0.089152559 container attach d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:38:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 26 07:38:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 26 07:38:53 np0005536586 podman[97703]: 2025-11-26 12:38:53.520999805 +0000 UTC m=+0.018327908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:53 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:38:53 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:53 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:38:53 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 26 07:38:53 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 26 07:38:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v61: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 07:38:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209780401' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 07:38:54 np0005536586 compassionate_noyce[97715]: 
Nov 26 07:38:54 np0005536586 compassionate_noyce[97715]: {"epoch":1,"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","modified":"2025-11-26T12:36:52.476654Z","created":"2025-11-26T12:36:52.476654Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 26 07:38:54 np0005536586 compassionate_noyce[97715]: dumped monmap epoch 1
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97703]: 2025-11-26 12:38:54.115614835 +0000 UTC m=+0.612942919 container died d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:38:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a-merged.mount: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97703]: 2025-11-26 12:38:54.140354784 +0000 UTC m=+0.637682866 container remove d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-conmon-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 hopeful_kepler[97672]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:38:54 np0005536586 hopeful_kepler[97672]: --> relative data size: 1.0
Nov 26 07:38:54 np0005536586 hopeful_kepler[97672]: --> All data devices are unavailable
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97657]: 2025-11-26 12:38:54.186035102 +0000 UTC m=+0.927206349 container died 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:38:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23-merged.mount: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97657]: 2025-11-26 12:38:54.216109684 +0000 UTC m=+0.957280931 container remove 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-conmon-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 python3[97909]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:54 np0005536586 podman[97930]: 2025-11-26 12:38:54.604391679 +0000 UTC m=+0.034693502 container create 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:54 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 26 07:38:54 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 26 07:38:54 np0005536586 systemd[1]: Started libpod-conmon-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope.
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.657086819 +0000 UTC m=+0.041797532 container create 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 07:38:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 podman[97930]: 2025-11-26 12:38:54.67515289 +0000 UTC m=+0.105454732 container init 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:38:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 26 07:38:54 np0005536586 systemd[1]: Started libpod-conmon-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope.
Nov 26 07:38:54 np0005536586 podman[97930]: 2025-11-26 12:38:54.681815465 +0000 UTC m=+0.112117288 container start 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:54 np0005536586 podman[97930]: 2025-11-26 12:38:54.586630934 +0000 UTC m=+0.016932758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:54 np0005536586 podman[97930]: 2025-11-26 12:38:54.686927277 +0000 UTC m=+0.117229100 container attach 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 26 07:38:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.706225229 +0000 UTC m=+0.090935942 container init 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.710785928 +0000 UTC m=+0.095496621 container start 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.711856674 +0000 UTC m=+0.096567397 container attach 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:38:54 np0005536586 naughty_panini[97969]: 167 167
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.713896593 +0000 UTC m=+0.098607306 container died 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-23315a006364f2b7fa7662a5ff6b068b04e44a01f6d2600eefa498eebd3302bb-merged.mount: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.643070749 +0000 UTC m=+0.027781472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:54 np0005536586 podman[97950]: 2025-11-26 12:38:54.741468189 +0000 UTC m=+0.126178892 container remove 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 26 07:38:54 np0005536586 systemd[1]: libpod-conmon-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope: Deactivated successfully.
Nov 26 07:38:54 np0005536586 podman[97990]: 2025-11-26 12:38:54.85134136 +0000 UTC m=+0.026439192 container create 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:38:54 np0005536586 systemd[1]: Started libpod-conmon-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope.
Nov 26 07:38:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:54 np0005536586 podman[97990]: 2025-11-26 12:38:54.913063978 +0000 UTC m=+0.088161809 container init 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:38:54 np0005536586 podman[97990]: 2025-11-26 12:38:54.918024264 +0000 UTC m=+0.093122096 container start 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:54 np0005536586 podman[97990]: 2025-11-26 12:38:54.91925764 +0000 UTC m=+0.094355470 container attach 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:54 np0005536586 podman[97990]: 2025-11-26 12:38:54.840650285 +0000 UTC m=+0.015748136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 26 07:38:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1798191645' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 07:38:55 np0005536586 adoring_fermat[97961]: [client.openstack]
Nov 26 07:38:55 np0005536586 adoring_fermat[97961]: #011key = AQBP9CZpAAAAABAAMO+aLuzMDoNYc4bplXQ8ZQ==
Nov 26 07:38:55 np0005536586 adoring_fermat[97961]: #011caps mgr = "allow *"
Nov 26 07:38:55 np0005536586 adoring_fermat[97961]: #011caps mon = "profile rbd"
Nov 26 07:38:55 np0005536586 adoring_fermat[97961]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 26 07:38:55 np0005536586 systemd[1]: libpod-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope: Deactivated successfully.
Nov 26 07:38:55 np0005536586 podman[97930]: 2025-11-26 12:38:55.196216258 +0000 UTC m=+0.626518081 container died 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 26 07:38:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af-merged.mount: Deactivated successfully.
Nov 26 07:38:55 np0005536586 podman[97930]: 2025-11-26 12:38:55.220656742 +0000 UTC m=+0.650958565 container remove 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:38:55 np0005536586 systemd[1]: libpod-conmon-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope: Deactivated successfully.
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]: {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    "0": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "devices": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "/dev/loop3"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            ],
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_name": "ceph_lv0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_size": "21470642176",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "name": "ceph_lv0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "tags": {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.crush_device_class": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.encrypted": "0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_id": "0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.vdo": "0"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            },
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "vg_name": "ceph_vg0"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        }
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    ],
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    "1": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "devices": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "/dev/loop4"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            ],
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_name": "ceph_lv1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_size": "21470642176",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "name": "ceph_lv1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "tags": {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.crush_device_class": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.encrypted": "0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_id": "1",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.vdo": "0"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            },
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "vg_name": "ceph_vg1"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        }
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    ],
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    "2": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "devices": [
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "/dev/loop5"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            ],
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_name": "ceph_lv2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_size": "21470642176",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "name": "ceph_lv2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "tags": {
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.cluster_name": "ceph",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.crush_device_class": "",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.encrypted": "0",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osd_id": "2",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:                "ceph.vdo": "0"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            },
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "type": "block",
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:            "vg_name": "ceph_vg2"
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:        }
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]:    ]
Nov 26 07:38:55 np0005536586 quizzical_matsumoto[98003]: }
Nov 26 07:38:55 np0005536586 systemd[1]: libpod-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope: Deactivated successfully.
Nov 26 07:38:55 np0005536586 conmon[98003]: conmon 7cbb2fb69fefb087a106 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope/container/memory.events
Nov 26 07:38:55 np0005536586 podman[97990]: 2025-11-26 12:38:55.558511802 +0000 UTC m=+0.733609633 container died 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:38:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974-merged.mount: Deactivated successfully.
Nov 26 07:38:55 np0005536586 podman[97990]: 2025-11-26 12:38:55.590818288 +0000 UTC m=+0.765916119 container remove 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:38:55 np0005536586 systemd[1]: libpod-conmon-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope: Deactivated successfully.
Nov 26 07:38:55 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/1798191645' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 07:38:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v62: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.014197826 +0000 UTC m=+0.027293512 container create 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:56 np0005536586 systemd[1]: Started libpod-conmon-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope.
Nov 26 07:38:56 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.069018819 +0000 UTC m=+0.082114514 container init 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.073911989 +0000 UTC m=+0.087007665 container start 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.075071954 +0000 UTC m=+0.088167631 container attach 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 07:38:56 np0005536586 nostalgic_franklin[98214]: 167 167
Nov 26 07:38:56 np0005536586 systemd[1]: libpod-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope: Deactivated successfully.
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.077317947 +0000 UTC m=+0.090413623 container died 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:56 np0005536586 systemd[1]: var-lib-containers-storage-overlay-de888c48f736ee103f8bdd39f5bc2ee61353bf0d1f59b9782fda337fda75866b-merged.mount: Deactivated successfully.
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.097372056 +0000 UTC m=+0.110467732 container remove 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:56 np0005536586 podman[98182]: 2025-11-26 12:38:56.003162962 +0000 UTC m=+0.016258658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:56 np0005536586 systemd[1]: libpod-conmon-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope: Deactivated successfully.
Nov 26 07:38:56 np0005536586 podman[98299]: 2025-11-26 12:38:56.213480155 +0000 UTC m=+0.030896961 container create 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:38:56 np0005536586 systemd[1]: Started libpod-conmon-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope.
Nov 26 07:38:56 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 26 07:38:56 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 26 07:38:56 np0005536586 podman[98299]: 2025-11-26 12:38:56.265264094 +0000 UTC m=+0.082680910 container init 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:38:56 np0005536586 podman[98299]: 2025-11-26 12:38:56.271915298 +0000 UTC m=+0.089332094 container start 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:56 np0005536586 podman[98299]: 2025-11-26 12:38:56.273002577 +0000 UTC m=+0.090419362 container attach 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:38:56 np0005536586 podman[98299]: 2025-11-26 12:38:56.201267773 +0000 UTC m=+0.018684589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:56 np0005536586 ansible-async_wrapper.py[98385]: Invoked with j562982286282 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160736.0507755-37201-152950182947434/AnsiballZ_command.py _
Nov 26 07:38:56 np0005536586 ansible-async_wrapper.py[98388]: Starting module and watcher
Nov 26 07:38:56 np0005536586 ansible-async_wrapper.py[98388]: Start watching 98389 (30)
Nov 26 07:38:56 np0005536586 ansible-async_wrapper.py[98389]: Start module (98389)
Nov 26 07:38:56 np0005536586 ansible-async_wrapper.py[98385]: Return async_wrapper task started.
Nov 26 07:38:56 np0005536586 python3[98390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:56 np0005536586 podman[98391]: 2025-11-26 12:38:56.595533259 +0000 UTC m=+0.029962490 container create 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:38:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 26 07:38:56 np0005536586 systemd[1]: Started libpod-conmon-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope.
Nov 26 07:38:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 26 07:38:56 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:56 np0005536586 podman[98391]: 2025-11-26 12:38:56.643285862 +0000 UTC m=+0.077715094 container init 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:56 np0005536586 podman[98391]: 2025-11-26 12:38:56.647481919 +0000 UTC m=+0.081911151 container start 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:38:56 np0005536586 podman[98391]: 2025-11-26 12:38:56.648607359 +0000 UTC m=+0.083036591 container attach 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:56 np0005536586 podman[98391]: 2025-11-26 12:38:56.583357645 +0000 UTC m=+0.017786888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]: {
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_id": 1,
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "type": "bluestore"
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    },
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_id": 2,
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "type": "bluestore"
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    },
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_id": 0,
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:        "type": "bluestore"
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]:    }
Nov 26 07:38:57 np0005536586 gifted_chaum[98353]: }
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98299]: 2025-11-26 12:38:57.044187533 +0000 UTC m=+0.861604330 container died 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:38:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8-merged.mount: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98299]: 2025-11-26 12:38:57.07374537 +0000 UTC m=+0.891162167 container remove 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-conmon-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:38:57 np0005536586 quirky_mendeleev[98403]: 
Nov 26 07:38:57 np0005536586 quirky_mendeleev[98403]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98391]: 2025-11-26 12:38:57.111715691 +0000 UTC m=+0.546144933 container died 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:57 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 07:38:57 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 07:38:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29-merged.mount: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98391]: 2025-11-26 12:38:57.134576529 +0000 UTC m=+0.569005760 container remove 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-conmon-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 ansible-async_wrapper.py[98389]: Module complete (98389)
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.536367074 +0000 UTC m=+0.028566300 container create e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 07:38:57 np0005536586 systemd[1]: Started libpod-conmon-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope.
Nov 26 07:38:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.587401742 +0000 UTC m=+0.079600968 container init e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.591557884 +0000 UTC m=+0.083757110 container start e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.592529655 +0000 UTC m=+0.084728881 container attach e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 07:38:57 np0005536586 determined_jones[98670]: 167 167
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.595526573 +0000 UTC m=+0.087725799 container died e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:38:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3832eadfe4fce13cf2817ff5954d48f324a5484f1635ac8ce01be643a41c36e8-merged.mount: Deactivated successfully.
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.611644113 +0000 UTC m=+0.103843339 container remove e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:38:57 np0005536586 podman[98632]: 2025-11-26 12:38:57.524643763 +0000 UTC m=+0.016842999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:57 np0005536586 systemd[1]: libpod-conmon-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope: Deactivated successfully.
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:57 np0005536586 ceph-mon[74966]: Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 07:38:57 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:57 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 26 07:38:57 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 26 07:38:57 np0005536586 python3[98672]: ansible-ansible.legacy.async_status Invoked with jid=j562982286282.98385 mode=status _async_dir=/root/.ansible_async
Nov 26 07:38:57 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:57 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v63: 131 pgs: 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:57 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:57 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:57 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:57 np0005536586 python3[98774]: ansible-ansible.legacy.async_status Invoked with jid=j562982286282.98385 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 07:38:58 np0005536586 systemd[1]: Starting Ceph rgw.rgw.compute-0.cpfqrx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:38:58 np0005536586 podman[98853]: 2025-11-26 12:38:58.293003347 +0000 UTC m=+0.030078210 container create 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.cpfqrx supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 podman[98853]: 2025-11-26 12:38:58.332397959 +0000 UTC m=+0.069472823 container init 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 26 07:38:58 np0005536586 podman[98853]: 2025-11-26 12:38:58.336178343 +0000 UTC m=+0.073253207 container start 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:58 np0005536586 bash[98853]: 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760
Nov 26 07:38:58 np0005536586 podman[98853]: 2025-11-26 12:38:58.280897875 +0000 UTC m=+0.017972759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:58 np0005536586 systemd[1]: Started Ceph rgw.rgw.compute-0.cpfqrx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 radosgw[98869]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:38:58 np0005536586 radosgw[98869]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 26 07:38:58 np0005536586 radosgw[98869]: framework: beast
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 07:38:58 np0005536586 radosgw[98869]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 26 07:38:58 np0005536586 radosgw[98869]: init_numa not setting numa affinity
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 07:38:58 np0005536586 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 07:38:58 np0005536586 python3[98915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:58 np0005536586 podman[99030]: 2025-11-26 12:38:58.551502401 +0000 UTC m=+0.030818184 container create 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 26 07:38:58 np0005536586 systemd[1]: Started libpod-conmon-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope.
Nov 26 07:38:58 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:58 np0005536586 podman[99030]: 2025-11-26 12:38:58.609667554 +0000 UTC m=+0.088983348 container init 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 26 07:38:58 np0005536586 podman[99030]: 2025-11-26 12:38:58.614813481 +0000 UTC m=+0.094129255 container start 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:38:58 np0005536586 podman[99030]: 2025-11-26 12:38:58.616146282 +0000 UTC m=+0.095462076 container attach 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:38:58 np0005536586 podman[99030]: 2025-11-26 12:38:58.538618823 +0000 UTC m=+0.017934616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 07:38:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.826902563 +0000 UTC m=+0.026814939 container create ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:58 np0005536586 systemd[1]: Started libpod-conmon-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope.
Nov 26 07:38:58 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.874802092 +0000 UTC m=+0.074714458 container init ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.879722905 +0000 UTC m=+0.079635271 container start ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.880783624 +0000 UTC m=+0.080696010 container attach ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:38:58 np0005536586 systemd[1]: libpod-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope: Deactivated successfully.
Nov 26 07:38:58 np0005536586 gifted_babbage[99120]: 167 167
Nov 26 07:38:58 np0005536586 conmon[99120]: conmon ca6aac2d045fb2c3094f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope/container/memory.events
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.883830896 +0000 UTC m=+0.083743262 container died ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:38:58 np0005536586 systemd[1]: var-lib-containers-storage-overlay-35ad0c431e076e2ce868147d17ea21ae61a704ba3f73950772dbf1d120ca1a4a-merged.mount: Deactivated successfully.
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.903358463 +0000 UTC m=+0.103270829 container remove ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:38:58 np0005536586 podman[99107]: 2025-11-26 12:38:58.816004797 +0000 UTC m=+0.015917163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:58 np0005536586 systemd[1]: libpod-conmon-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope: Deactivated successfully.
Nov 26 07:38:58 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:58 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:59 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:59 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:38:59 np0005536586 vigorous_almeida[99069]: 
Nov 26 07:38:59 np0005536586 vigorous_almeida[99069]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 07:38:59 np0005536586 podman[99030]: 2025-11-26 12:38:59.126597114 +0000 UTC m=+0.605912887 container died 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:59 np0005536586 systemd[1]: libpod-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope: Deactivated successfully.
Nov 26 07:38:59 np0005536586 systemd[1]: var-lib-containers-storage-overlay-11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda-merged.mount: Deactivated successfully.
Nov 26 07:38:59 np0005536586 podman[99030]: 2025-11-26 12:38:59.157933202 +0000 UTC m=+0.637248975 container remove 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:59 np0005536586 systemd[1]: libpod-conmon-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope: Deactivated successfully.
Nov 26 07:38:59 np0005536586 systemd[1]: Reloading.
Nov 26 07:38:59 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:38:59 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 07:38:59 np0005536586 systemd[1]: Starting Ceph mds.cephfs.compute-0.ipyiim for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 07:38:59 np0005536586 podman[99284]: 2025-11-26 12:38:59.577286893 +0000 UTC m=+0.030201290 container create bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ipyiim supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 podman[99284]: 2025-11-26 12:38:59.617888001 +0000 UTC m=+0.070802398 container init bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:38:59 np0005536586 podman[99284]: 2025-11-26 12:38:59.624348926 +0000 UTC m=+0.077263313 container start bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:38:59 np0005536586 bash[99284]: bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173
Nov 26 07:38:59 np0005536586 podman[99284]: 2025-11-26 12:38:59.565221969 +0000 UTC m=+0.018136366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:38:59 np0005536586 systemd[1]: Started Ceph mds.cephfs.compute-0.ipyiim for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: Saving service rgw.rgw spec with placement compute-0
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:59 np0005536586 ceph-mds[99300]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:38:59 np0005536586 ceph-mds[99300]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 26 07:38:59 np0005536586 ceph-mds[99300]: main not setting numa affinity
Nov 26 07:38:59 np0005536586 ceph-mds[99300]: pidfile_write: ignore empty --pid-file
Nov 26 07:38:59 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim[99296]: starting mds.cephfs.compute-0.ipyiim at 
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:59 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 07:38:59 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 07:38:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:38:59 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 2 from mon.0
Nov 26 07:38:59 np0005536586 python3[99367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:38:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v65: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:38:59 np0005536586 podman[99440]: 2025-11-26 12:38:59.877744233 +0000 UTC m=+0.034178094 container create 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:38:59 np0005536586 systemd[1]: Started libpod-conmon-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope.
Nov 26 07:38:59 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:38:59 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:38:59 np0005536586 podman[99440]: 2025-11-26 12:38:59.936281678 +0000 UTC m=+0.092715560 container init 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:38:59 np0005536586 podman[99440]: 2025-11-26 12:38:59.941639204 +0000 UTC m=+0.098073066 container start 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:38:59 np0005536586 podman[99440]: 2025-11-26 12:38:59.943036075 +0000 UTC m=+0.099469937 container attach 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:38:59 np0005536586 podman[99440]: 2025-11-26 12:38:59.864910789 +0000 UTC m=+0.021344671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:00 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 26 07:39:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:00 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 26 07:39:00 np0005536586 podman[99586]: 2025-11-26 12:39:00.30989548 +0000 UTC m=+0.038871099 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:00 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:39:00 np0005536586 wonderful_buck[99481]: 
Nov 26 07:39:00 np0005536586 wonderful_buck[99481]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 26 07:39:00 np0005536586 podman[99586]: 2025-11-26 12:39:00.392053955 +0000 UTC m=+0.121029555 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 26 07:39:00 np0005536586 systemd[1]: libpod-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope: Deactivated successfully.
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 26 07:39:00 np0005536586 podman[99440]: 2025-11-26 12:39:00.399170135 +0000 UTC m=+0.555603997 container died 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 07:39:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:00 np0005536586 systemd[1]: var-lib-containers-storage-overlay-17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413-merged.mount: Deactivated successfully.
Nov 26 07:39:00 np0005536586 podman[99440]: 2025-11-26 12:39:00.428265791 +0000 UTC m=+0.584699653 container remove 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:39:00 np0005536586 systemd[1]: libpod-conmon-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope: Deactivated successfully.
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 new map
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T12:38:49.414687+0000#012modified#0112025-11-26T12:38:49.414741+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ipyiim{-1:14265} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 3 from mon.0
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Monitors have assigned me to become a standby.
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:boot
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] as mds.0
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ipyiim assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ipyiim"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ipyiim"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 all = 0
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e4 new map
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T12:38:49.414687+0000#012modified#0112025-11-26T12:39:00.680673+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.ipyiim{0:14265} state up:creating seq 1 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 4 from mon.0
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:creating}
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x1
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x100
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x600
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x601
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x602
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x603
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x604
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x605
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x606
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x607
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x608
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x609
Nov 26 07:39:00 np0005536586 ceph-mds[99300]: mds.0.4 creating_done
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ipyiim is now active in filesystem cephfs as rank 0
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9f3fcf6d-08b7-4fa5-b928-de72eb959739 does not exist
Nov 26 07:39:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d8103df1-b2c0-4cf6-b2b7-65f22ebeda99 does not exist
Nov 26 07:39:00 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 6529d20d-66d3-4946-a386-76f2e70990b0 does not exist
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:39:00 np0005536586 ceph-mgr[75236]: [progress INFO root] Writing back 9 completed events
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:01 np0005536586 python3[99864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.241800681 +0000 UTC m=+0.032694438 container create b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.268339138 +0000 UTC m=+0.034071734 container create e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:01 np0005536586 systemd[1]: Started libpod-conmon-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope.
Nov 26 07:39:01 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:01 np0005536586 systemd[1]: Started libpod-conmon-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope.
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.299036824 +0000 UTC m=+0.089930582 container init b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:39:01 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.306562725 +0000 UTC m=+0.097456483 container start b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.308509483 +0000 UTC m=+0.099403251 container attach b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.310188267 +0000 UTC m=+0.075920853 container init e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.314532072 +0000 UTC m=+0.080264658 container start e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.31599583 +0000 UTC m=+0.081728417 container attach e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:01 np0005536586 reverent_shaw[99924]: 167 167
Nov 26 07:39:01 np0005536586 systemd[1]: libpod-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope: Deactivated successfully.
Nov 26 07:39:01 np0005536586 conmon[99924]: conmon e8305df17682862bff5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope/container/memory.events
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.318735322 +0000 UTC m=+0.084467908 container died e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.225883496 +0000 UTC m=+0.016777274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:01 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d583b8691a968f670fde82d7e363244f10ad073038a305eb59f19c720036e441-merged.mount: Deactivated successfully.
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.340501608 +0000 UTC m=+0.106234194 container remove e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:01 np0005536586 podman[99905]: 2025-11-26 12:39:01.257435753 +0000 UTC m=+0.023168358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:01 np0005536586 systemd[1]: libpod-conmon-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope: Deactivated successfully.
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 07:39:01 np0005536586 ansible-async_wrapper.py[98388]: Done in kid B.
Nov 26 07:39:01 np0005536586 podman[99947]: 2025-11-26 12:39:01.462008601 +0000 UTC m=+0.028845787 container create c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:39:01 np0005536586 systemd[1]: Started libpod-conmon-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope.
Nov 26 07:39:01 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:01 np0005536586 podman[99947]: 2025-11-26 12:39:01.529617199 +0000 UTC m=+0.096454395 container init c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:39:01 np0005536586 podman[99947]: 2025-11-26 12:39:01.534874445 +0000 UTC m=+0.101711621 container start c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:39:01 np0005536586 podman[99947]: 2025-11-26 12:39:01.536340037 +0000 UTC m=+0.103177224 container attach c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:01 np0005536586 podman[99947]: 2025-11-26 12:39:01.449718822 +0000 UTC m=+0.016556019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:01 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 26 07:39:01 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 26 07:39:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 26 07:39:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: daemon mds.cephfs.compute-0.ipyiim assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: Cluster is now healthy
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: daemon mds.cephfs.compute-0.ipyiim is now active in filesystem cephfs as rank 0
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e5 new map
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-26T12:38:49.414687+0000
modified	2025-11-26T12:39:01.682081+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14265}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-0.ipyiim{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]
Nov 26 07:39:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 5 from mon.0
Nov 26 07:39:01 np0005536586 ceph-mds[99300]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 07:39:01 np0005536586 ceph-mds[99300]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 26 07:39:01 np0005536586 ceph-mds[99300]: mds.0.4 recovery_done -- successful recovery!
Nov 26 07:39:01 np0005536586 ceph-mds[99300]: mds.0.4 active_start
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:active
Nov 26 07:39:01 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:active}
Nov 26 07:39:01 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 07:39:01 np0005536586 heuristic_haslett[99918]: 
Nov 26 07:39:01 np0005536586 heuristic_haslett[99918]: [{"container_id": "3e7332a87e08", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.49%", "created": "2025-11-26T12:37:54.342893Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-26T12:37:54.378128Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825715Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-11-26T12:37:54.272378Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@crash.compute-0", "version": "18.2.7"}, {"container_id": "bc6bc48477a3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.80%", "created": "2025-11-26T12:38:59.630860Z", "daemon_id": "cephfs.compute-0.ipyiim", "daemon_name": "mds.cephfs.compute-0.ipyiim", "daemon_type": "mds", "events": ["2025-11-26T12:38:59.661416Z daemon:mds.cephfs.compute-0.ipyiim [INFO] \"Deployed mds.cephfs.compute-0.ipyiim on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.826002Z", 
"memory_usage": 12729712, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-26T12:38:59.568805Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mds.cephfs.compute-0.ipyiim", "version": "18.2.7"}, {"container_id": "c06d21624ca8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "31.64%", "created": "2025-11-26T12:36:57.403651Z", "daemon_id": "compute-0.whkbdn", "daemon_name": "mgr.compute-0.whkbdn", "daemon_type": "mgr", "events": ["2025-11-26T12:37:57.682828Z daemon:mgr.compute-0.whkbdn [INFO] \"Reconfigured mgr.compute-0.whkbdn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825659Z", "memory_usage": 548719820, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-26T12:36:57.345673Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.whkbdn", "version": "18.2.7"}, {"container_id": "ba65664ab41f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.71%", "created": "2025-11-26T12:36:53.883618Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-26T12:37:57.142211Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-11-26T12:39:00.825575Z", "memory_request": 2147483648, "memory_usage": 38252052, "ports": [], "service_name": "mon", "started": "2025-11-26T12:36:55.819213Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mon.compute-0", "version": "18.2.7"}, {"container_id": "9981961b7997", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.17%", "created": "2025-11-26T12:38:15.569150Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-26T12:38:15.599429Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825791Z", "memory_request": 4294967296, "memory_usage": 59653488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:15.504837Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.0", "version": "18.2.7"}, {"container_id": "7fe95a8b384c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.34%", "created": "2025-11-26T12:38:18.844206Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-11-26T12:38:18.932341Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825848Z", "memory_request": 4294967296, "memory_usage": 63491276, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:18.695428Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.1", "version": "18.2.7"}, {"container_id": "fad0efe7fb69", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.41%", "created": "2025-11-26T12:38:22.364266Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-26T12:38:22.445960Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825898Z", "memory_request": 4294967296, "memory_usage": 64088965, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:22.211172Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.2", "version": "18.2.7"}, {"container_id": "24825469580c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "2.83%", "created": "2025-11-26T12:38:58.342848Z", "daemon_id": "rgw.compute-0.cpfqrx", "daemon_name": "rgw.rgw.compute-0.cpfqrx", "daemon_type": "rgw", "events": ["2025-11-26T
Nov 26 07:39:01 np0005536586 systemd[1]: libpod-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope: Deactivated successfully.
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.767387012 +0000 UTC m=+0.558280780 container died b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:39:01 np0005536586 rsyslogd[962]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "3e7332a87e08", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 07:39:01 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d-merged.mount: Deactivated successfully.
Nov 26 07:39:01 np0005536586 podman[99893]: 2025-11-26 12:39:01.790498422 +0000 UTC m=+0.581392180 container remove b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:39:01 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:01 np0005536586 systemd[1]: libpod-conmon-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope: Deactivated successfully.
Nov 26 07:39:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v68: 133 pgs: 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 7 op/s
Nov 26 07:39:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 26 07:39:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 26 07:39:02 np0005536586 elated_bassi[99960]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:39:02 np0005536586 elated_bassi[99960]: --> relative data size: 1.0
Nov 26 07:39:02 np0005536586 elated_bassi[99960]: --> All data devices are unavailable
Nov 26 07:39:02 np0005536586 systemd[1]: libpod-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope: Deactivated successfully.
Nov 26 07:39:02 np0005536586 podman[99947]: 2025-11-26 12:39:02.366210368 +0000 UTC m=+0.933047545 container died c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:39:02 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568-merged.mount: Deactivated successfully.
Nov 26 07:39:02 np0005536586 podman[99947]: 2025-11-26 12:39:02.396674514 +0000 UTC m=+0.963511689 container remove c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:39:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 26 07:39:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 07:39:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 26 07:39:02 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 26 07:39:02 np0005536586 systemd[1]: libpod-conmon-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope: Deactivated successfully.
Nov 26 07:39:02 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:02 np0005536586 python3[100134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:02 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 07:39:02 np0005536586 podman[100160]: 2025-11-26 12:39:02.713166097 +0000 UTC m=+0.034171332 container create b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:02 np0005536586 systemd[1]: Started libpod-conmon-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope.
Nov 26 07:39:02 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:02 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:02 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:02 np0005536586 podman[100160]: 2025-11-26 12:39:02.761430904 +0000 UTC m=+0.082436140 container init b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:02 np0005536586 podman[100160]: 2025-11-26 12:39:02.767816016 +0000 UTC m=+0.088821251 container start b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:02 np0005536586 podman[100160]: 2025-11-26 12:39:02.769004215 +0000 UTC m=+0.090009480 container attach b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:02 np0005536586 podman[100160]: 2025-11-26 12:39:02.701006734 +0000 UTC m=+0.022011989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.850499752 +0000 UTC m=+0.028380880 container create 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:02 np0005536586 systemd[1]: Started libpod-conmon-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope.
Nov 26 07:39:02 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.894674444 +0000 UTC m=+0.072555571 container init 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.899804149 +0000 UTC m=+0.077685287 container start 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.900888372 +0000 UTC m=+0.078769510 container attach 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 07:39:02 np0005536586 systemd[1]: libpod-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope: Deactivated successfully.
Nov 26 07:39:02 np0005536586 great_beaver[100219]: 167 167
Nov 26 07:39:02 np0005536586 conmon[100219]: conmon 279cbca14096f9ee3abd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope/container/memory.events
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.903880841 +0000 UTC m=+0.081761979 container died 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:02 np0005536586 systemd[1]: var-lib-containers-storage-overlay-688167e4ad44c393dcd75ed6cdde94ce62b985be8b7058815c79def1a8cadf01-merged.mount: Deactivated successfully.
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.925105546 +0000 UTC m=+0.102986684 container remove 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:02 np0005536586 podman[100206]: 2025-11-26 12:39:02.837548087 +0000 UTC m=+0.015429245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:02 np0005536586 systemd[1]: libpod-conmon-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope: Deactivated successfully.
Nov 26 07:39:03 np0005536586 podman[100241]: 2025-11-26 12:39:03.036398191 +0000 UTC m=+0.028816092 container create df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:39:03 np0005536586 systemd[1]: Started libpod-conmon-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope.
Nov 26 07:39:03 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:03 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:03 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:03 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:03 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:03 np0005536586 podman[100241]: 2025-11-26 12:39:03.095997917 +0000 UTC m=+0.088415818 container init df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:39:03 np0005536586 podman[100241]: 2025-11-26 12:39:03.101304557 +0000 UTC m=+0.093722457 container start df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:39:03 np0005536586 podman[100241]: 2025-11-26 12:39:03.104821405 +0000 UTC m=+0.097239325 container attach df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:39:03 np0005536586 podman[100241]: 2025-11-26 12:39:03.025437006 +0000 UTC m=+0.017854926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324126989' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 07:39:03 np0005536586 modest_sammet[100184]: 
Nov 26 07:39:03 np0005536586 modest_sammet[100184]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":130},{"state_name":"unknown","count":2},{"state_name":"active+clean+scrubbing","count":1}],"num_pgs":133,"num_pools":9,"num_objects":23,"data_bytes":461642,"bytes_used":83861504,"bytes_avail":64328065024,"bytes_total":64411926528,"unknown_pgs_ratio":0.015037594363093376,"write_bytes_sec":2388,"read_op_per_sec":0,"write_op_per_sec":7},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.ipyiim","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T12:38:37.846513+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 07:39:03 np0005536586 systemd[1]: libpod-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope: Deactivated successfully.
Nov 26 07:39:03 np0005536586 podman[100160]: 2025-11-26 12:39:03.27033411 +0000 UTC m=+0.591339355 container died b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:39:03 np0005536586 podman[100160]: 2025-11-26 12:39:03.292469671 +0000 UTC m=+0.613474906 container remove b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:39:03 np0005536586 systemd[1]: libpod-conmon-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope: Deactivated successfully.
Nov 26 07:39:03 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea-merged.mount: Deactivated successfully.
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 07:39:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 26 07:39:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 26 07:39:03 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]: {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    "0": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "devices": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "/dev/loop3"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            ],
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_name": "ceph_lv0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_size": "21470642176",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "name": "ceph_lv0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "tags": {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.crush_device_class": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.encrypted": "0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_id": "0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.vdo": "0"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            },
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "vg_name": "ceph_vg0"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        }
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    ],
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    "1": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "devices": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "/dev/loop4"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            ],
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_name": "ceph_lv1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_size": "21470642176",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "name": "ceph_lv1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "tags": {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.crush_device_class": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.encrypted": "0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_id": "1",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.vdo": "0"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            },
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "vg_name": "ceph_vg1"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        }
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    ],
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    "2": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "devices": [
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "/dev/loop5"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            ],
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_name": "ceph_lv2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_size": "21470642176",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "name": "ceph_lv2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "tags": {
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.crush_device_class": "",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.encrypted": "0",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osd_id": "2",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:                "ceph.vdo": "0"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            },
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "type": "block",
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:            "vg_name": "ceph_vg2"
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:        }
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]:    ]
Nov 26 07:39:03 np0005536586 distracted_kalam[100273]: }
Nov 26 07:39:03 np0005536586 systemd[1]: libpod-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope: Deactivated successfully.
Nov 26 07:39:03 np0005536586 podman[100293]: 2025-11-26 12:39:03.778003779 +0000 UTC m=+0.017976734 container died df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:39:03 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc-merged.mount: Deactivated successfully.
Nov 26 07:39:03 np0005536586 podman[100293]: 2025-11-26 12:39:03.806558116 +0000 UTC m=+0.046531071 container remove df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:39:03 np0005536586 systemd[1]: libpod-conmon-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope: Deactivated successfully.
Nov 26 07:39:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v71: 134 pgs: 1 creating+peering, 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 26 07:39:04 np0005536586 python3[100430]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:04 np0005536586 podman[100453]: 2025-11-26 12:39:04.20829393 +0000 UTC m=+0.032955862 container create e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 07:39:04 np0005536586 systemd[1]: Started libpod-conmon-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope.
Nov 26 07:39:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 podman[100453]: 2025-11-26 12:39:04.257971461 +0000 UTC m=+0.082633414 container init e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:39:04 np0005536586 podman[100453]: 2025-11-26 12:39:04.262825757 +0000 UTC m=+0.087487690 container start e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:04 np0005536586 podman[100453]: 2025-11-26 12:39:04.26402635 +0000 UTC m=+0.088688303 container attach e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.266693795 +0000 UTC m=+0.029724653 container create 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:39:04 np0005536586 podman[100453]: 2025-11-26 12:39:04.192198069 +0000 UTC m=+0.016860023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:04 np0005536586 systemd[1]: Started libpod-conmon-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope.
Nov 26 07:39:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.328729904 +0000 UTC m=+0.091760782 container init 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.333621471 +0000 UTC m=+0.096652329 container start 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.334858953 +0000 UTC m=+0.097889830 container attach 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:39:04 np0005536586 determined_leavitt[100493]: 167 167
Nov 26 07:39:04 np0005536586 systemd[1]: libpod-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope: Deactivated successfully.
Nov 26 07:39:04 np0005536586 conmon[100493]: conmon 6563b87f683c8f8f6234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope/container/memory.events
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.337950318 +0000 UTC m=+0.100981175 container died 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:39:04 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b1ecbbbb36c737d6d2bfc6ff2cb6567a8a6f18e15a055643d9e024ddb62072f1-merged.mount: Deactivated successfully.
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.254335248 +0000 UTC m=+0.017366125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:04 np0005536586 podman[100475]: 2025-11-26 12:39:04.357652495 +0000 UTC m=+0.120683351 container remove 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:39:04 np0005536586 systemd[1]: libpod-conmon-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope: Deactivated successfully.
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 26 07:39:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:04 np0005536586 podman[100515]: 2025-11-26 12:39:04.499744977 +0000 UTC m=+0.031411842 container create 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:04 np0005536586 systemd[1]: Started libpod-conmon-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope.
Nov 26 07:39:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:04 np0005536586 podman[100515]: 2025-11-26 12:39:04.559258722 +0000 UTC m=+0.090925607 container init 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:39:04 np0005536586 podman[100515]: 2025-11-26 12:39:04.567028453 +0000 UTC m=+0.098695318 container start 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 07:39:04 np0005536586 podman[100515]: 2025-11-26 12:39:04.568149996 +0000 UTC m=+0.099816862 container attach 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 07:39:04 np0005536586 podman[100515]: 2025-11-26 12:39:04.489092283 +0000 UTC m=+0.020759169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:04 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 26 07:39:04 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 07:39:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219168040' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 07:39:04 np0005536586 practical_mccarthy[100477]: 
Nov 26 07:39:04 np0005536586 practical_mccarthy[100477]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.cpfqrx","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 26 07:39:04 np0005536586 systemd[1]: libpod-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope: Deactivated successfully.
Nov 26 07:39:04 np0005536586 podman[100568]: 2025-11-26 12:39:04.766679036 +0000 UTC m=+0.016978574 container died e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:04 np0005536586 systemd[1]: var-lib-containers-storage-overlay-df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f-merged.mount: Deactivated successfully.
Nov 26 07:39:04 np0005536586 podman[100568]: 2025-11-26 12:39:04.787579339 +0000 UTC m=+0.037878868 container remove e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 07:39:04 np0005536586 systemd[1]: libpod-conmon-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope: Deactivated successfully.
Nov 26 07:39:05 np0005536586 angry_allen[100542]: {
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_id": 1,
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "type": "bluestore"
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    },
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_id": 2,
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "type": "bluestore"
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    },
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_id": 0,
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:39:05 np0005536586 angry_allen[100542]:        "type": "bluestore"
Nov 26 07:39:05 np0005536586 angry_allen[100542]:    }
Nov 26 07:39:05 np0005536586 angry_allen[100542]: }
Nov 26 07:39:05 np0005536586 systemd[1]: libpod-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope: Deactivated successfully.
Nov 26 07:39:05 np0005536586 podman[100607]: 2025-11-26 12:39:05.379296796 +0000 UTC m=+0.017301884 container died 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:05 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489-merged.mount: Deactivated successfully.
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 26 07:39:05 np0005536586 podman[100607]: 2025-11-26 12:39:05.407277361 +0000 UTC m=+0.045282459 container remove 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 07:39:05 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:05 np0005536586 systemd[1]: libpod-conmon-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope: Deactivated successfully.
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 753f5091-b0da-4503-82c4-6a97d089162b does not exist
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev dd8122ff-bad0-48a8-991c-71dbe10fcaed does not exist
Nov 26 07:39:05 np0005536586 python3[100644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:05 np0005536586 podman[100719]: 2025-11-26 12:39:05.618683535 +0000 UTC m=+0.030645208 container create 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:05 np0005536586 systemd[1]: Started libpod-conmon-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope.
Nov 26 07:39:05 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:05 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:05 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:05 np0005536586 podman[100719]: 2025-11-26 12:39:05.663194269 +0000 UTC m=+0.075155952 container init 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:05 np0005536586 podman[100719]: 2025-11-26 12:39:05.667658342 +0000 UTC m=+0.079620024 container start 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:39:05 np0005536586 podman[100719]: 2025-11-26 12:39:05.668863463 +0000 UTC m=+0.080825145 container attach 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 26 07:39:05 np0005536586 ceph-mds[99300]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 07:39:05 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim[99296]: 2025-11-26T12:39:05.686+0000 7f641120d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:05 np0005536586 podman[100719]: 2025-11-26 12:39:05.605718565 +0000 UTC m=+0.017680247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v74: 135 pgs: 1 unknown, 1 creating+peering, 1 active+clean+scrubbing, 132 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:06 np0005536586 podman[100886]: 2025-11-26 12:39:06.065680147 +0000 UTC m=+0.036600469 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607187607' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 07:39:06 np0005536586 sleepy_clarke[100758]: mimic
Nov 26 07:39:06 np0005536586 systemd[1]: libpod-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope: Deactivated successfully.
Nov 26 07:39:06 np0005536586 podman[100719]: 2025-11-26 12:39:06.1222049 +0000 UTC m=+0.534166571 container died 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:06 np0005536586 systemd[1]: var-lib-containers-storage-overlay-eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff-merged.mount: Deactivated successfully.
Nov 26 07:39:06 np0005536586 podman[100719]: 2025-11-26 12:39:06.147635971 +0000 UTC m=+0.559597643 container remove 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:06 np0005536586 systemd[1]: libpod-conmon-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope: Deactivated successfully.
Nov 26 07:39:06 np0005536586 podman[100914]: 2025-11-26 12:39:06.200864612 +0000 UTC m=+0.046918272 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:06 np0005536586 podman[100886]: 2025-11-26 12:39:06.202846868 +0000 UTC m=+0.173767180 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9ba173af-d5e2-4452-a4cd-b9fe7207edbb does not exist
Nov 26 07:39:06 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 34cf9d9d-7229-4e52-84e5-34ba2ff0c250 does not exist
Nov 26 07:39:06 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e1d9600a-fce1-421e-a23d-97c06b55f878 does not exist
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:06 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:39:06 np0005536586 python3[101155]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:06 np0005536586 podman[101175]: 2025-11-26 12:39:06.992569797 +0000 UTC m=+0.027108453 container create a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:07 np0005536586 systemd[1]: Started libpod-conmon-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope.
Nov 26 07:39:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:07.038876686 +0000 UTC m=+0.073415342 container init a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:07.046809403 +0000 UTC m=+0.081348049 container start a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:07.049814937 +0000 UTC m=+0.084353593 container attach a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:06.981648016 +0000 UTC m=+0.016186672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.100677991 +0000 UTC m=+0.032798675 container create 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:07 np0005536586 systemd[1]: Started libpod-conmon-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope.
Nov 26 07:39:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.142457108 +0000 UTC m=+0.074577781 container init 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.146362356 +0000 UTC m=+0.078483040 container start 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.147698976 +0000 UTC m=+0.079819680 container attach 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:07 np0005536586 dreamy_kalam[101216]: 167 167
Nov 26 07:39:07 np0005536586 systemd[1]: libpod-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope: Deactivated successfully.
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.149901435 +0000 UTC m=+0.082022119 container died 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:07 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f9369c15448f6c10862603c5690114f9bac20fe13b401160f08d987945de5eb0-merged.mount: Deactivated successfully.
Nov 26 07:39:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.172503135 +0000 UTC m=+0.104623819 container remove 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:07 np0005536586 podman[101203]: 2025-11-26 12:39:07.089703782 +0000 UTC m=+0.021824486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 26 07:39:07 np0005536586 systemd[1]: libpod-conmon-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope: Deactivated successfully.
Nov 26 07:39:07 np0005536586 podman[101238]: 2025-11-26 12:39:07.286930958 +0000 UTC m=+0.028099110 container create f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:07 np0005536586 systemd[1]: Started libpod-conmon-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope.
Nov 26 07:39:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:07 np0005536586 podman[101238]: 2025-11-26 12:39:07.348672781 +0000 UTC m=+0.089840943 container init f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:07 np0005536586 podman[101238]: 2025-11-26 12:39:07.353635813 +0000 UTC m=+0.094803966 container start f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:39:07 np0005536586 podman[101238]: 2025-11-26 12:39:07.35559241 +0000 UTC m=+0.096760582 container attach f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:39:07 np0005536586 podman[101238]: 2025-11-26 12:39:07.274900889 +0000 UTC m=+0.016069061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 26 07:39:07 np0005536586 radosgw[98869]: LDAP not started since no server URIs were provided in the configuration.
Nov 26 07:39:07 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx[98865]: 2025-11-26T12:39:07.459+0000 7fdc13f5c940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 26 07:39:07 np0005536586 radosgw[98869]: framework: beast
Nov 26 07:39:07 np0005536586 radosgw[98869]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 26 07:39:07 np0005536586 radosgw[98869]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 26 07:39:07 np0005536586 radosgw[98869]: starting handler: beast
Nov 26 07:39:07 np0005536586 radosgw[98869]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 07:39:07 np0005536586 radosgw[98869]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.cpfqrx,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=bf23e527-2843-459f-8ad0-cbeb0777daef,zone_name=default,zonegroup_id=20a26587-2166-4189-a102-225650a14516,zonegroup_name=default}
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 26 07:39:07 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807273787' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 07:39:07 np0005536586 festive_mclean[101199]: 
Nov 26 07:39:07 np0005536586 festive_mclean[101199]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 26 07:39:07 np0005536586 systemd[1]: libpod-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope: Deactivated successfully.
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:07.558093164 +0000 UTC m=+0.592631810 container died a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:39:07 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 26 07:39:07 np0005536586 systemd[1]: var-lib-containers-storage-overlay-998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce-merged.mount: Deactivated successfully.
Nov 26 07:39:07 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 26 07:39:07 np0005536586 podman[101175]: 2025-11-26 12:39:07.580713699 +0000 UTC m=+0.615252345 container remove a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:07 np0005536586 systemd[1]: libpod-conmon-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope: Deactivated successfully.
Nov 26 07:39:07 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 26 07:39:07 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 26 07:39:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v77: 135 pgs: 1 unknown, 1 creating+peering, 133 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 07:39:08 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 26 07:39:08 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 26 07:39:08 np0005536586 goofy_allen[101270]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:39:08 np0005536586 goofy_allen[101270]: --> relative data size: 1.0
Nov 26 07:39:08 np0005536586 goofy_allen[101270]: --> All data devices are unavailable
Nov 26 07:39:08 np0005536586 systemd[1]: libpod-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope: Deactivated successfully.
Nov 26 07:39:08 np0005536586 podman[101852]: 2025-11-26 12:39:08.260374642 +0000 UTC m=+0.016082827 container died f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:39:08 np0005536586 systemd[1]: var-lib-containers-storage-overlay-72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892-merged.mount: Deactivated successfully.
Nov 26 07:39:08 np0005536586 podman[101852]: 2025-11-26 12:39:08.289644537 +0000 UTC m=+0.045352712 container remove f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 07:39:08 np0005536586 systemd[1]: libpod-conmon-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope: Deactivated successfully.
Nov 26 07:39:08 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 07:39:08 np0005536586 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 07:39:08 np0005536586 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.723169983 +0000 UTC m=+0.032647521 container create 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:08 np0005536586 systemd[1]: Started libpod-conmon-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope.
Nov 26 07:39:08 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.769882015 +0000 UTC m=+0.079359573 container init 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.774460993 +0000 UTC m=+0.083938531 container start 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.775683598 +0000 UTC m=+0.085161135 container attach 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:39:08 np0005536586 admiring_newton[102007]: 167 167
Nov 26 07:39:08 np0005536586 systemd[1]: libpod-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope: Deactivated successfully.
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.777746285 +0000 UTC m=+0.087223822 container died 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 07:39:08 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e8147b9bdf171642259c9be3b06a26a9bb66f14e72a7b8396766826270b7fb5f-merged.mount: Deactivated successfully.
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.798594099 +0000 UTC m=+0.108071637 container remove 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:39:08 np0005536586 podman[101994]: 2025-11-26 12:39:08.710541988 +0000 UTC m=+0.020019555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:08 np0005536586 systemd[1]: libpod-conmon-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope: Deactivated successfully.
Nov 26 07:39:08 np0005536586 podman[102031]: 2025-11-26 12:39:08.914636063 +0000 UTC m=+0.028646512 container create 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:08 np0005536586 systemd[1]: Started libpod-conmon-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope.
Nov 26 07:39:08 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:08 np0005536586 podman[102031]: 2025-11-26 12:39:08.967786829 +0000 UTC m=+0.081797307 container init 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:08 np0005536586 podman[102031]: 2025-11-26 12:39:08.9733701 +0000 UTC m=+0.087380548 container start 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:39:08 np0005536586 podman[102031]: 2025-11-26 12:39:08.974486503 +0000 UTC m=+0.088496952 container attach 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:39:08 np0005536586 podman[102031]: 2025-11-26 12:39:08.903180257 +0000 UTC m=+0.017190725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:09 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 26 07:39:09 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 26 07:39:09 np0005536586 ceph-mon[74966]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 07:39:09 np0005536586 ceph-mon[74966]: Cluster is now healthy
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]: {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    "0": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "devices": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "/dev/loop3"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            ],
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_name": "ceph_lv0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_size": "21470642176",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "name": "ceph_lv0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "tags": {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.crush_device_class": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.encrypted": "0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_id": "0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.vdo": "0"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            },
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "vg_name": "ceph_vg0"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        }
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    ],
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    "1": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "devices": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "/dev/loop4"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            ],
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_name": "ceph_lv1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_size": "21470642176",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "name": "ceph_lv1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "tags": {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.crush_device_class": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.encrypted": "0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_id": "1",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.vdo": "0"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            },
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "vg_name": "ceph_vg1"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        }
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    ],
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    "2": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "devices": [
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "/dev/loop5"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            ],
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_name": "ceph_lv2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_size": "21470642176",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "name": "ceph_lv2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "tags": {
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.cluster_name": "ceph",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.crush_device_class": "",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.encrypted": "0",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osd_id": "2",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:                "ceph.vdo": "0"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            },
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "type": "block",
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:            "vg_name": "ceph_vg2"
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:        }
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]:    ]
Nov 26 07:39:09 np0005536586 competent_sanderson[102044]: }
Nov 26 07:39:09 np0005536586 systemd[1]: libpod-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope: Deactivated successfully.
Nov 26 07:39:09 np0005536586 podman[102031]: 2025-11-26 12:39:09.605886926 +0000 UTC m=+0.719897375 container died 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:39:09 np0005536586 systemd[1]: var-lib-containers-storage-overlay-21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e-merged.mount: Deactivated successfully.
Nov 26 07:39:09 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 26 07:39:09 np0005536586 podman[102031]: 2025-11-26 12:39:09.637141982 +0000 UTC m=+0.751152431 container remove 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:39:09 np0005536586 systemd[1]: libpod-conmon-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope: Deactivated successfully.
Nov 26 07:39:09 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 26 07:39:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v78: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 7.0 KiB/s wr, 204 op/s
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.035575693 +0000 UTC m=+0.025536671 container create ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:10 np0005536586 systemd[1]: Started libpod-conmon-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope.
Nov 26 07:39:10 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.075538887 +0000 UTC m=+0.065499875 container init ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.079840964 +0000 UTC m=+0.069801932 container start ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.080917112 +0000 UTC m=+0.070878080 container attach ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:10 np0005536586 flamboyant_galileo[102206]: 167 167
Nov 26 07:39:10 np0005536586 systemd[1]: libpod-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope: Deactivated successfully.
Nov 26 07:39:10 np0005536586 conmon[102206]: conmon ced88ad3a1f3f3dfdfd5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope/container/memory.events
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.08399407 +0000 UTC m=+0.073955039 container died ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 07:39:10 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6eb5674a165bb980a1f94071837bddf5216f10643f0ef186624c5fe983d30ef4-merged.mount: Deactivated successfully.
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.101696388 +0000 UTC m=+0.091657356 container remove ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 26 07:39:10 np0005536586 podman[102193]: 2025-11-26 12:39:10.025031494 +0000 UTC m=+0.014992482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:10 np0005536586 systemd[1]: libpod-conmon-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope: Deactivated successfully.
Nov 26 07:39:10 np0005536586 podman[102228]: 2025-11-26 12:39:10.208811991 +0000 UTC m=+0.025837619 container create 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:39:10 np0005536586 systemd[1]: Started libpod-conmon-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope.
Nov 26 07:39:10 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:10 np0005536586 podman[102228]: 2025-11-26 12:39:10.26095078 +0000 UTC m=+0.077976417 container init 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:39:10 np0005536586 podman[102228]: 2025-11-26 12:39:10.265571647 +0000 UTC m=+0.082597264 container start 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:39:10 np0005536586 podman[102228]: 2025-11-26 12:39:10.266851569 +0000 UTC m=+0.083877196 container attach 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:10 np0005536586 podman[102228]: 2025-11-26 12:39:10.198537742 +0000 UTC m=+0.015563389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:39:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 26 07:39:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 26 07:39:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]: {
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_id": 1,
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "type": "bluestore"
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    },
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_id": 2,
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "type": "bluestore"
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    },
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_id": 0,
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:        "type": "bluestore"
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]:    }
Nov 26 07:39:11 np0005536586 admiring_dubinsky[102241]: }
Nov 26 07:39:11 np0005536586 systemd[1]: libpod-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope: Deactivated successfully.
Nov 26 07:39:11 np0005536586 podman[102228]: 2025-11-26 12:39:11.039628065 +0000 UTC m=+0.856653722 container died 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:11 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002-merged.mount: Deactivated successfully.
Nov 26 07:39:11 np0005536586 podman[102228]: 2025-11-26 12:39:11.070316452 +0000 UTC m=+0.887342079 container remove 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:11 np0005536586 systemd[1]: libpod-conmon-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope: Deactivated successfully.
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:11 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 2a080a50-abb6-4a13-b871-f6322681eddf does not exist
Nov 26 07:39:11 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev efdb0c01-1ef0-4add-ba55-274f2b31e62a does not exist
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:11 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:11 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 26 07:39:11 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 26 07:39:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v79: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 170 op/s
Nov 26 07:39:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 26 07:39:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 26 07:39:13 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 26 07:39:13 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 26 07:39:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 26 07:39:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 26 07:39:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v80: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.0 KiB/s wr, 137 op/s
Nov 26 07:39:14 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 26 07:39:14 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 26 07:39:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v81: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.4 KiB/s wr, 116 op/s
Nov 26 07:39:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:16 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 26 07:39:16 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 26 07:39:16 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 07:39:16 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 07:39:17 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 26 07:39:17 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 26 07:39:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v82: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 105 op/s
Nov 26 07:39:18 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 07:39:18 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 07:39:19 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 26 07:39:19 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 26 07:39:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v83: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 26 07:39:20 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 26 07:39:20 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 26 07:39:20 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 26 07:39:20 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 26 07:39:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 26 07:39:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 26 07:39:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v84: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v85: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 26 07:39:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 26 07:39:25 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 26 07:39:25 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 26 07:39:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v86: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:26 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 26 07:39:26 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 26 07:39:26 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 07:39:26 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 07:39:27 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 26 07:39:27 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 26 07:39:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v87: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:28 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 26 07:39:28 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 26 07:39:29 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 07:39:29 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 07:39:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v88: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:30 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 07:39:30 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 07:39:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v89: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:32 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 07:39:32 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 07:39:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v90: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 26 07:39:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:39:35
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms']
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v91: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:39:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:39:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v92: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:38 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 26 07:39:38 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 07:39:39 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 26 07:39:39 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v93: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 26 07:39:39 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:39:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:40 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 26 07:39:40 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 26 07:39:40 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 07:39:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v96: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 26 07:39:41 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:39:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 26 07:39:42 np0005536586 python3[102359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:42 np0005536586 podman[102360]: 2025-11-26 12:39:42.423016588 +0000 UTC m=+0.027848530 container create 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 07:39:42 np0005536586 systemd[76457]: Starting Mark boot as successful...
Nov 26 07:39:42 np0005536586 systemd[76457]: Finished Mark boot as successful.
Nov 26 07:39:42 np0005536586 systemd[1]: Started libpod-conmon-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope.
Nov 26 07:39:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:42 np0005536586 podman[102360]: 2025-11-26 12:39:42.463935635 +0000 UTC m=+0.068767577 container init 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 07:39:42 np0005536586 podman[102360]: 2025-11-26 12:39:42.468713231 +0000 UTC m=+0.073545173 container start 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 07:39:42 np0005536586 podman[102360]: 2025-11-26 12:39:42.469931598 +0000 UTC m=+0.074763540 container attach 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:39:42 np0005536586 podman[102360]: 2025-11-26 12:39:42.41150944 +0000 UTC m=+0.016341402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.745674133s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active pruub 98.196243286s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.745674133s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown pruub 98.196243286s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 jolly_nash[102373]: could not fetch user info: no user info saved
Nov 26 07:39:42 np0005536586 systemd[1]: libpod-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope: Deactivated successfully.
Nov 26 07:39:42 np0005536586 podman[102458]: 2025-11-26 12:39:42.602191903 +0000 UTC m=+0.016183272 container died 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:39:42 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957-merged.mount: Deactivated successfully.
Nov 26 07:39:42 np0005536586 podman[102458]: 2025-11-26 12:39:42.623277828 +0000 UTC m=+0.037269208 container remove 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:39:42 np0005536586 systemd[1]: libpod-conmon-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope: Deactivated successfully.
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.548981667s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 33'38 active pruub 100.505180359s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.548981667s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 unknown pruub 100.505180359s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 python3[102495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:42 np0005536586 podman[102496]: 2025-11-26 12:39:42.901322013 +0000 UTC m=+0.030112596 container create 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:39:42 np0005536586 systemd[1]: Started libpod-conmon-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope.
Nov 26 07:39:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:39:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:39:42 np0005536586 podman[102496]: 2025-11-26 12:39:42.95810606 +0000 UTC m=+0.086896663 container init 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 26 07:39:42 np0005536586 podman[102496]: 2025-11-26 12:39:42.96241593 +0000 UTC m=+0.091206513 container start 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 26 07:39:42 np0005536586 podman[102496]: 2025-11-26 12:39:42.964724301 +0000 UTC m=+0.093514884 container attach 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:39:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:42 np0005536586 podman[102496]: 2025-11-26 12:39:42.889149616 +0000 UTC m=+0.017940219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]: {
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "user_id": "openstack",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "display_name": "openstack",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "email": "",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "suspended": 0,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "max_buckets": 1000,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "subusers": [],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "keys": [
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        {
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:            "user": "openstack",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:            "access_key": "WYI892EBCV9E5ADCPWUD",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:            "secret_key": "DFGAh7AOOJhuNdPraF6w21clnbHt7zF6Fw4Ep8TV"
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        }
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    ],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "swift_keys": [],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "caps": [],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "op_mask": "read, write, delete",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "default_placement": "",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "default_storage_class": "",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "placement_tags": [],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "bucket_quota": {
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "enabled": false,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "check_on_raw": false,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_size": -1,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_size_kb": 0,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_objects": -1
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    },
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "user_quota": {
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "enabled": false,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "check_on_raw": false,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_size": -1,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_size_kb": 0,
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:        "max_objects": -1
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    },
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "temp_url_keys": [],
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "type": "rgw",
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]:    "mfa_ids": []
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]: }
Nov 26 07:39:43 np0005536586 strange_ardinghelli[102508]: 
Nov 26 07:39:43 np0005536586 systemd[1]: libpod-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope: Deactivated successfully.
Nov 26 07:39:43 np0005536586 conmon[102508]: conmon 156df5fa25748864d732 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope/container/memory.events
Nov 26 07:39:43 np0005536586 podman[102593]: 2025-11-26 12:39:43.10415895 +0000 UTC m=+0.015921848 container died 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 07:39:43 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6-merged.mount: Deactivated successfully.
Nov 26 07:39:43 np0005536586 podman[102593]: 2025-11-26 12:39:43.121315394 +0000 UTC m=+0.033078283 container remove 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:39:43 np0005536586 systemd[1]: libpod-conmon-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope: Deactivated successfully.
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 07:39:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 26 07:39:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 26 07:39:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v99: 181 pgs: 46 unknown, 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.435843468s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 44'388 active pruub 98.346565247s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.434915543s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 96.345756531s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.434915543s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 96.345756531s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:43 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:43 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.435843468s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 unknown pruub 98.346565247s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 07:39:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 26 07:39:44 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 07:39:44 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] update: starting ev 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] complete: finished ev 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 07:39:44 np0005536586 ceph-mgr[75236]: [progress INFO root] Completed event 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:44 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:45 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 07:39:45 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 07:39:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 07:39:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 07:39:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v102: 243 pgs: 2 peering, 77 unknown, 164 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:45 np0005536586 ceph-mgr[75236]: [progress INFO root] Writing back 15 completed events
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 26 07:39:45 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 26 07:39:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.427951813s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 44'1 active pruub 94.355407715s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:45 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.427951813s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 unknown pruub 94.355407715s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 26 07:39:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 26 07:39:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 26 07:39:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 26 07:39:46 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 26 07:39:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:46 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 96.626022339s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 07:39:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 07:39:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v105: 305 pgs: 2 peering, 124 unknown, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 07:39:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 26 07:39:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 26 07:39:48 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 07:39:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 07:39:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 26 07:39:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 26 07:39:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:39:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.956858635s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.221313477s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957725525s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222213745s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.956802368s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.221313477s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957687378s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222213745s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957573891s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222312927s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957553864s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222312927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957663536s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222503662s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957644463s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222503662s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957647324s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222511292s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957631111s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222511292s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957588196s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222564697s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957574844s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222564697s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958526611s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.223617554s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958513260s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.223617554s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958328247s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.223571777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958296776s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.223571777s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.953216553s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911293030s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.953197479s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911293030s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966938019s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925094604s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966925621s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925094604s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966979980s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925231934s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966959000s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925231934s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.952880859s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911300659s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.952866554s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911300659s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966715813s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925521851s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966698647s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925521851s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972962379s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.931999207s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972946167s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.931999207s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950360298s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909515381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950347900s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909515381s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965970039s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925201416s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965958595s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925201416s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950175285s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909484863s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950165749s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909484863s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965841293s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925209045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965830803s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925209045s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972534180s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.931983948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972522736s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.931983948s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949930191s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909461975s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949919701s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909461975s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.973234177s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932838440s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.973223686s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932838440s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965620041s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925300598s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965609550s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925300598s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965567589s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925315857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965557098s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925315857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949582100s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909431458s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949571609s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909431458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972041130s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932014465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972028732s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932014465s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.951236725s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911300659s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.951225281s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911300659s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965163231s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925292969s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965151787s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925292969s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965291977s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925491333s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965282440s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925491333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971770287s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932029724s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971759796s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932029724s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948816299s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909446716s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948799133s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909446716s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964583397s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925338745s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964568138s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925338745s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971268654s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932113647s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971258163s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932113647s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948448181s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909423828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948434830s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909423828s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964313507s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925346375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964303970s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925346375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970659256s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932006836s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970640182s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932006836s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970690727s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932128906s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970678329s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932128906s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963773727s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925544739s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963752747s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925544739s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963482857s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925445557s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963466644s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925445557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963430405s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925460815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963411331s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925460815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970579147s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932723999s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970562935s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932723999s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963310242s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925514221s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963294983s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925514221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.947049141s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909370422s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963112831s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925529480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963096619s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925529480s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970295906s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932785034s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970280647s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932785034s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.947036743s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909370422s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946689606s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909362793s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946668625s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909378052s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946654320s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909378052s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962790489s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925582886s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962779999s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969814301s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932762146s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969794273s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932762146s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946678162s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909362793s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946285248s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909416199s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946269989s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909416199s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946118355s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909347534s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946105003s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909347534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962831497s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926254272s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962813377s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926254272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969324112s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932777405s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969307899s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932777405s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.945761681s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909332275s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.945750237s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909332275s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962580681s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926269531s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962568283s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926269531s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964559555s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.928268433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969063759s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932830811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969052315s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932830811s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964543343s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.928268433s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962424278s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926292419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962412834s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926292419s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968824387s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932807922s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968811989s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932807922s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962259293s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926315308s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962218285s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926307678s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962239265s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926315308s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962206841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926307678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968664169s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932815552s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968652725s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932815552s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962080956s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926338196s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962067604s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926338196s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961989403s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926330566s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961977959s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926330566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968335152s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932861328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968203545s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932861328s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961240768s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926361084s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961174965s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926361084s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967593193s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932884216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967578888s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932884216s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960934639s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926368713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943595886s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909278870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960314751s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926383972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968079567s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934272766s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960093498s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926414490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959667206s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926391602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967455864s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934303284s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959156990s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926422119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966730118s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934188843s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958840370s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941487312s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909187317s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966361046s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934234619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962287903s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925605774s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958274841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958201408s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926445007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965899467s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934219360s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965804100s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934226990s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957959175s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926460266s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965680122s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934288025s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940426826s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940348625s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909233093s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973402023s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973158836s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.222549438s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971399307s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.220855713s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972857475s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222450256s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972937584s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972858429s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222679138s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972868919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222824097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973010063s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223129272s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972805977s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222976685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972826004s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222915649s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972652435s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223014832s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972572327s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223045349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972480774s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223052979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972475052s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223068237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972394943s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223098755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972376823s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223136902s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972357750s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223175049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972307205s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223182678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972043037s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222984314s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971755981s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223205566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972020149s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223991394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970626831s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223220825s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934625626s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 26 07:39:50 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 objects/s recovering
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 26 07:39:51 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 26 07:39:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 07:39:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 07:39:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 07:39:52 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090338707s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222427368s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091548920s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090384483s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222587585s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091360092s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223678589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.853003502s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815834045s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852263451s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815780640s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852128983s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815803528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:53 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:53 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 07:39:53 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 07:39:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 222 B/s, 2 objects/s recovering
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 07:39:53 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.915017128s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882560730s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848460197s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816101074s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847923279s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815940857s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914413452s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882606506s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847566605s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815841675s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847449303s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815879822s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918402672s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887001038s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847307205s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815971375s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847074509s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815902710s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847107887s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816116333s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917899132s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887329102s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846574783s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816085815s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847175598s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816719055s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846515656s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816139221s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414970s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.817054749s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846344948s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816062927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846558571s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815994263s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 07:39:54 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:54 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 26 07:39:54 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 26 07:39:54 np0005536586 systemd-logind[777]: New session 33 of user zuul.
Nov 26 07:39:54 np0005536586 systemd[1]: Started Session 33 of User zuul.
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:55 np0005536586 python3.9[102757]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:39:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 1 objects/s recovering
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 07:39:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 26 07:39:56 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 26 07:39:56 np0005536586 python3.9[102975]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:39:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 26 07:39:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764075279s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222412109s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764774323s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:57 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:57 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 835 B/s, 4 keys/s, 22 objects/s recovering
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 07:39:57 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 07:39:57 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 07:39:58 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884973526s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880828857s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884659767s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880821228s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:58 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:58 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:39:58 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:58 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 07:39:58 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 07:39:59 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:59 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:39:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 07:39:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 07:39:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 26 07:39:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 26 07:39:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 4 keys/s, 21 objects/s recovering
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 07:39:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607535362s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925239563s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607428551s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.926414490s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.613403320s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.932228088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605746269s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925514221s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 26 07:40:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 26 07:40:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 26 07:40:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:00 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 07:40:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 07:40:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 07:40:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 07:40:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 07:40:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 26 07:40:01 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 26 07:40:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 07:40:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 07:40:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 07:40:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 07:40:02 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:02 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:02 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:02 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 26 07:40:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 26 07:40:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 26 07:40:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 26 07:40:03 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415146828s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.441520691s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416400909s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443046570s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416756630s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443283081s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416030884s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443237305s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 systemd[1]: session-33.scope: Deactivated successfully.
Nov 26 07:40:03 np0005536586 systemd[1]: session-33.scope: Consumed 6.529s CPU time.
Nov 26 07:40:03 np0005536586 systemd-logind[777]: Session 33 logged out. Waiting for processes to exit.
Nov 26 07:40:03 np0005536586 systemd-logind[777]: Removed session 33.
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765221596s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.301292419s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767482758s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303794861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.757040977s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 active pruub 121.293930054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766945839s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303878784s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 07:40:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 26 07:40:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 26 07:40:04 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 26 07:40:04 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 26 07:40:05 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676793098s) [2] async=[2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874816895s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676989555s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874961853s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.678121567s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.876121521s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676982880s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874923706s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 07:40:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 26 07:40:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 26 07:40:06 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:06 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 07:40:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 07:40:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 07:40:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 07:40:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 708 B/s wr, 23 op/s; 190 B/s, 6 objects/s recovering
Nov 26 07:40:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 07:40:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 07:40:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 07:40:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.871047974s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.926139832s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873212814s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.928573608s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:10 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363718033s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 126.222518921s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 26 07:40:10 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:10 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:11 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev bb324a4e-04ca-4ac3-bfe8-f63a00d6650c does not exist
Nov 26 07:40:11 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 61eebe65-bda6-43c3-87e1-5566fee4934e does not exist
Nov 26 07:40:11 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 76985d16-1c68-4761-9785-8967082678c9 does not exist
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 691 B/s wr, 23 op/s; 185 B/s, 6 objects/s recovering
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 26 07:40:11 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 26 07:40:11 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003540039s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 122.887435913s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:11 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:11 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.075956682 +0000 UTC m=+0.027584550 container create 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:40:12 np0005536586 systemd[1]: Started libpod-conmon-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope.
Nov 26 07:40:12 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.128329441 +0000 UTC m=+0.079957310 container init 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.132942243 +0000 UTC m=+0.084570110 container start 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.133967385 +0000 UTC m=+0.085595253 container attach 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 07:40:12 np0005536586 brave_mccarthy[103305]: 167 167
Nov 26 07:40:12 np0005536586 systemd[1]: libpod-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope: Deactivated successfully.
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.137465656 +0000 UTC m=+0.089093525 container died 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:40:12 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5736097c2d694b8708a2d28f42edbe7700ffbd3d7455716d260e3d723feef7f1-merged.mount: Deactivated successfully.
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.154272397 +0000 UTC m=+0.105900264 container remove 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:40:12 np0005536586 podman[103292]: 2025-11-26 12:40:12.06412156 +0000 UTC m=+0.015749428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:12 np0005536586 systemd[1]: libpod-conmon-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope: Deactivated successfully.
Nov 26 07:40:12 np0005536586 podman[103326]: 2025-11-26 12:40:12.268436864 +0000 UTC m=+0.029800667 container create da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:40:12 np0005536586 systemd[1]: Started libpod-conmon-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope.
Nov 26 07:40:12 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:12 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:12 np0005536586 podman[103326]: 2025-11-26 12:40:12.333704783 +0000 UTC m=+0.095068585 container init da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:12 np0005536586 podman[103326]: 2025-11-26 12:40:12.337831158 +0000 UTC m=+0.099194961 container start da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:40:12 np0005536586 podman[103326]: 2025-11-26 12:40:12.340309519 +0000 UTC m=+0.101673341 container attach da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 07:40:12 np0005536586 podman[103326]: 2025-11-26 12:40:12.256736326 +0000 UTC m=+0.018100128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 26 07:40:12 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:12 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.671307564s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.558670044s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669959068s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.557617188s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:12 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:12 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:13 np0005536586 elastic_wilson[103339]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:40:13 np0005536586 elastic_wilson[103339]: --> relative data size: 1.0
Nov 26 07:40:13 np0005536586 elastic_wilson[103339]: --> All data devices are unavailable
Nov 26 07:40:13 np0005536586 systemd[1]: libpod-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103326]: 2025-11-26 12:40:13.15915303 +0000 UTC m=+0.920516832 container died da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:40:13 np0005536586 systemd[1]: var-lib-containers-storage-overlay-501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6-merged.mount: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103326]: 2025-11-26 12:40:13.19056379 +0000 UTC m=+0.951927591 container remove da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:13 np0005536586 systemd[1]: libpod-conmon-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.612900085 +0000 UTC m=+0.028164353 container create 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:13 np0005536586 systemd[1]: Started libpod-conmon-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope.
Nov 26 07:40:13 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.658733032 +0000 UTC m=+0.073997319 container init 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.663828003 +0000 UTC m=+0.079092270 container start 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.665022433 +0000 UTC m=+0.080286730 container attach 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:13 np0005536586 hopeful_montalcini[103523]: 167 167
Nov 26 07:40:13 np0005536586 systemd[1]: libpod-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.666731033 +0000 UTC m=+0.081995300 container died 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:40:13 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9c99d2e1c82d6119e7274f186ef7a82b01d388488764159b8e8aa688894bd237-merged.mount: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.684519463 +0000 UTC m=+0.099783730 container remove 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:40:13 np0005536586 podman[103509]: 2025-11-26 12:40:13.601787796 +0000 UTC m=+0.017052083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:13 np0005536586 systemd[1]: libpod-conmon-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope: Deactivated successfully.
Nov 26 07:40:13 np0005536586 podman[103545]: 2025-11-26 12:40:13.797813711 +0000 UTC m=+0.028601157 container create 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:40:13 np0005536586 systemd[1]: Started libpod-conmon-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope.
Nov 26 07:40:13 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:13 np0005536586 podman[103545]: 2025-11-26 12:40:13.849051231 +0000 UTC m=+0.079838667 container init 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:40:13 np0005536586 podman[103545]: 2025-11-26 12:40:13.853729157 +0000 UTC m=+0.084516592 container start 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:40:13 np0005536586 podman[103545]: 2025-11-26 12:40:13.85519624 +0000 UTC m=+0.085983676 container attach 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 07:40:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 26 07:40:13 np0005536586 podman[103545]: 2025-11-26 12:40:13.786205176 +0000 UTC m=+0.016992612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:13 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 26 07:40:13 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 26 07:40:13 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 26 07:40:13 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:13 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]: {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    "0": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "devices": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "/dev/loop3"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            ],
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_name": "ceph_lv0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_size": "21470642176",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "name": "ceph_lv0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "tags": {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_name": "ceph",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.crush_device_class": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.encrypted": "0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_id": "0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.vdo": "0"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            },
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "vg_name": "ceph_vg0"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        }
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    ],
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    "1": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "devices": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "/dev/loop4"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            ],
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_name": "ceph_lv1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_size": "21470642176",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "name": "ceph_lv1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "tags": {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_name": "ceph",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.crush_device_class": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.encrypted": "0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_id": "1",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.vdo": "0"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            },
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "vg_name": "ceph_vg1"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        }
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    ],
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    "2": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "devices": [
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "/dev/loop5"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            ],
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_name": "ceph_lv2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_size": "21470642176",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "name": "ceph_lv2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "tags": {
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.cluster_name": "ceph",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.crush_device_class": "",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.encrypted": "0",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osd_id": "2",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:                "ceph.vdo": "0"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            },
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "type": "block",
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:            "vg_name": "ceph_vg2"
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:        }
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]:    ]
Nov 26 07:40:14 np0005536586 nifty_kapitsa[103558]: }
Nov 26 07:40:14 np0005536586 systemd[1]: libpod-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope: Deactivated successfully.
Nov 26 07:40:14 np0005536586 conmon[103558]: conmon 46ba957521a473ce1f43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope/container/memory.events
Nov 26 07:40:14 np0005536586 podman[103567]: 2025-11-26 12:40:14.527795874 +0000 UTC m=+0.019008851 container died 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:14 np0005536586 systemd[1]: var-lib-containers-storage-overlay-66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329-merged.mount: Deactivated successfully.
Nov 26 07:40:14 np0005536586 podman[103567]: 2025-11-26 12:40:14.56061605 +0000 UTC m=+0.051829026 container remove 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:14 np0005536586 systemd[1]: libpod-conmon-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope: Deactivated successfully.
Nov 26 07:40:14 np0005536586 podman[103709]: 2025-11-26 12:40:14.976801265 +0000 UTC m=+0.026638889 container create 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:40:15 np0005536586 systemd[1]: Started libpod-conmon-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope.
Nov 26 07:40:15 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:15.024388346 +0000 UTC m=+0.074225980 container init 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:15.029401844 +0000 UTC m=+0.079239458 container start 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:15.030567851 +0000 UTC m=+0.080405485 container attach 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:40:15 np0005536586 magical_wilbur[103722]: 167 167
Nov 26 07:40:15 np0005536586 systemd[1]: libpod-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope: Deactivated successfully.
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:15.03309347 +0000 UTC m=+0.082931104 container died 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:15 np0005536586 systemd[1]: var-lib-containers-storage-overlay-28011ddee321da2f9ba04817238e3ff4daf1c1aeeb8d3d38c897f5ef0262d66d-merged.mount: Deactivated successfully.
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:15.05436296 +0000 UTC m=+0.104200574 container remove 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:40:15 np0005536586 podman[103709]: 2025-11-26 12:40:14.966436033 +0000 UTC m=+0.016273667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:15 np0005536586 systemd[1]: libpod-conmon-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope: Deactivated successfully.
Nov 26 07:40:15 np0005536586 podman[103743]: 2025-11-26 12:40:15.170324844 +0000 UTC m=+0.028638376 container create d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:40:15 np0005536586 systemd[1]: Started libpod-conmon-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope.
Nov 26 07:40:15 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:40:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:15 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:40:15 np0005536586 podman[103743]: 2025-11-26 12:40:15.230000425 +0000 UTC m=+0.088313967 container init d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:40:15 np0005536586 podman[103743]: 2025-11-26 12:40:15.235439815 +0000 UTC m=+0.093753347 container start d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:40:15 np0005536586 podman[103743]: 2025-11-26 12:40:15.236602936 +0000 UTC m=+0.094916469 container attach d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:40:15 np0005536586 podman[103743]: 2025-11-26 12:40:15.157926912 +0000 UTC m=+0.016240464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:40:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 07:40:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 07:40:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Nov 26 07:40:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:15 np0005536586 eager_hopper[103756]: {
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_id": 1,
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "type": "bluestore"
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    },
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_id": 2,
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "type": "bluestore"
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    },
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_id": 0,
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:        "type": "bluestore"
Nov 26 07:40:15 np0005536586 eager_hopper[103756]:    }
Nov 26 07:40:15 np0005536586 eager_hopper[103756]: }
Nov 26 07:40:16 np0005536586 systemd[1]: libpod-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope: Deactivated successfully.
Nov 26 07:40:16 np0005536586 conmon[103756]: conmon d021722196b6bf946e8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope/container/memory.events
Nov 26 07:40:16 np0005536586 podman[103743]: 2025-11-26 12:40:16.013483735 +0000 UTC m=+0.871797267 container died d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 07:40:16 np0005536586 systemd[1]: var-lib-containers-storage-overlay-80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9-merged.mount: Deactivated successfully.
Nov 26 07:40:16 np0005536586 podman[103743]: 2025-11-26 12:40:16.044090409 +0000 UTC m=+0.902403951 container remove d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:40:16 np0005536586 systemd[1]: libpod-conmon-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope: Deactivated successfully.
Nov 26 07:40:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:40:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:40:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:16 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d990967c-022f-469c-8e75-cef71d9108fa does not exist
Nov 26 07:40:16 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev f758bd9f-19e9-42a7-b5ae-56d78cc625e6 does not exist
Nov 26 07:40:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 07:40:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 07:40:16 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 07:40:16 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 07:40:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:40:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 26 07:40:18 np0005536586 systemd-logind[777]: New session 34 of user zuul.
Nov 26 07:40:18 np0005536586 systemd[1]: Started Session 34 of User zuul.
Nov 26 07:40:18 np0005536586 python3.9[104001]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 07:40:19 np0005536586 python3.9[104175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:40:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 07:40:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 07:40:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 07:40:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 07:40:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 26 07:40:20 np0005536586 python3.9[104331]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:40:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:21 np0005536586 python3.9[104484]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:40:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 07:40:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 07:40:21 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746877670s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 132.966613770s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:21 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:21 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:21 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 07:40:21 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 07:40:21 np0005536586 python3.9[104638]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:40:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 26 07:40:22 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 26 07:40:22 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 26 07:40:22 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:22 np0005536586 python3.9[104790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:40:22 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 07:40:22 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 07:40:22 np0005536586 python3.9[104940]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:40:22 np0005536586 network[104957]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:40:22 np0005536586 network[104958]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:40:22 np0005536586 network[104959]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:40:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 26 07:40:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 26 07:40:24 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 26 07:40:24 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 26 07:40:25 np0005536586 python3.9[105219]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:40:25 np0005536586 python3.9[105369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:40:25 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 26 07:40:25 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 26 07:40:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:26 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 07:40:26 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 07:40:26 np0005536586 python3.9[105523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:40:26 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 07:40:26 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 07:40:27 np0005536586 python3.9[105681]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:40:27 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 26 07:40:27 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 26 07:40:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 07:40:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 07:40:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 07:40:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 07:40:27 np0005536586 python3.9[105765]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 07:40:28 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 07:40:28 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 26 07:40:28 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 07:40:29 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283830643s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active pruub 146.304275513s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:29 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:29 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:29 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 07:40:29 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 07:40:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 07:40:29 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842657089s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926010132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843101501s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926895142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 26 07:40:30 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 07:40:31 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 26 07:40:31 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 26 07:40:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 26 07:40:31 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 26 07:40:32 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:32 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 07:40:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 07:40:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 07:40:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 07:40:32 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.681272507s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active pruub 150.319351196s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:32 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:32 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 26 07:40:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 26 07:40:33 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 26 07:40:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946485519s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.045059204s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944923401s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.043884277s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 07:40:33 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 07:40:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 26 07:40:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 26 07:40:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 26 07:40:34 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 26 07:40:34 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:34 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 26 07:40:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:40:35
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:40:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:40:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:36 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 07:40:36 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 07:40:36 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 26 07:40:36 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 26 07:40:37 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 07:40:37 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 07:40:37 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Nov 26 07:40:37 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Nov 26 07:40:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 26 07:40:38 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 26 07:40:38 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 26 07:40:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Nov 26 07:40:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 07:40:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 07:40:39 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 07:40:39 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 26 07:40:40 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 07:40:40 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 07:40:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 07:40:41 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 07:40:41 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 07:40:41 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 26 07:40:41 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 26 07:40:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 07:40:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 07:40:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 07:40:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856573105s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active pruub 154.304397583s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:42 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:42 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 26 07:40:43 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 07:40:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 07:40:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 26 07:40:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 07:40:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 26 07:40:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 07:40:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 26 07:40:44 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 26 07:40:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.359070782053787e-07 of space, bias 4.0, pg target 0.0007630884938464544 quantized to 16 (current 16)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:40:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:40:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 07:40:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 07:40:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 26 07:40:45 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 07:40:45 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 07:40:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:45 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 07:40:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 26 07:40:46 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 07:40:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 07:40:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 26 07:40:46 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 26 07:40:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 07:40:47 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 26 07:40:47 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 26 07:40:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 07:40:47 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 26 07:40:47 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 07:40:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 07:40:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 07:40:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 26 07:40:48 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 07:40:48 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 07:40:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 26 07:40:48 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 26 07:40:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 07:40:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 07:40:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 07:40:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 26 07:40:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 07:40:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071921349s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 active pruub 169.294906616s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:50 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:50 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 26 07:40:51 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 07:40:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 26 07:40:51 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 26 07:40:51 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:51 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:51 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:51 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 26 07:40:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 26 07:40:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:40:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 26 07:40:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 26 07:40:52 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 26 07:40:52 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:52 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 26 07:40:52 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 26 07:40:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 26 07:40:52 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 26 07:40:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 26 07:40:53 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 26 07:40:53 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 26 07:40:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997830391s) [2] async=[2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active pruub 171.500854492s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:53 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:40:53 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:40:53 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:40:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 07:40:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 26 07:40:54 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 26 07:40:54 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 26 07:40:54 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:40:55 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 07:40:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 26 07:40:55 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 07:40:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:40:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 07:40:56 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 07:40:57 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 07:40:57 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 07:40:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 07:40:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 07:40:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 07:40:57 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 26 07:40:57 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 26 07:40:58 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 07:40:58 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 07:40:59 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 07:40:59 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 07:40:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 26 07:40:59 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 26 07:40:59 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 07:41:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 26 07:41:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 26 07:41:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:01 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 07:41:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 26 07:41:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 26 07:41:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 26 07:41:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 07:41:01 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 07:41:01 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 07:41:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 26 07:41:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 07:41:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 26 07:41:02 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 26 07:41:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 07:41:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 07:41:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 07:41:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 07:41:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 07:41:03 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 07:41:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 26 07:41:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 07:41:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 26 07:41:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 07:41:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 26 07:41:04 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 26 07:41:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.804219246s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 active pruub 172.308624268s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:04 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:04 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 07:41:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754682541s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 active pruub 178.302368164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:04 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:04 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 07:41:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 07:41:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 26 07:41:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 26 07:41:05 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:05 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:05 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:05 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:05 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 26 07:41:05 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 26 07:41:05 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 26 07:41:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 26 07:41:06 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 26 07:41:06 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:06 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:07 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 07:41:07 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 07:41:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 26 07:41:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 26 07:41:07 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996132851s) [0] async=[0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 44'389 active pruub 178.522598267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:07 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:07 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 26 07:41:07 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:07 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:07 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.074978828s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 44'389 active pruub 185.642929077s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:07 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:07 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:07 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Nov 26 07:41:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Nov 26 07:41:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 26 07:41:07 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 26 07:41:07 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 07:41:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 07:41:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 07:41:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 26 07:41:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 07:41:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 26 07:41:08 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 26 07:41:08 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 07:41:08 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:08 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 26 07:41:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 26 07:41:09 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 07:41:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 7 op/s; 47 B/s, 1 objects/s recovering
Nov 26 07:41:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 26 07:41:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 07:41:10 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 26 07:41:10 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 26 07:41:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:11 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 07:41:11 np0005536586 python3.9[106067]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:41:11 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 07:41:11 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 07:41:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 6 op/s; 59 B/s, 2 objects/s recovering
Nov 26 07:41:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 26 07:41:11 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 07:41:11 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 26 07:41:11 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 26 07:41:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 26 07:41:12 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 07:41:12 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 26 07:41:12 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.713154793s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 active pruub 186.304855347s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:12 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:12 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 26 07:41:12 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 07:41:12 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:12 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 26 07:41:12 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 26 07:41:13 np0005536586 python3.9[106354]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 26 07:41:13 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:13 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 26 07:41:13 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 07:41:13 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:13 np0005536586 python3.9[106506]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 07:41:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 07:41:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 07:41:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 26 07:41:13 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 07:41:14 np0005536586 python3.9[106658]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 07:41:14 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 07:41:14 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:14 np0005536586 python3.9[106810]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 07:41:14 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 07:41:14 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 26 07:41:15 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987218857s) [2] async=[2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 44'389 active pruub 193.604431152s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:15 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:15 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:15 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:15 np0005536586 python3.9[106962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:41:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 26 07:41:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 26 07:41:15 np0005536586 python3.9[107114]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:41:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 07:41:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:16 np0005536586 python3.9[107192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 26 07:41:16 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 07:41:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:16 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e2c9ff1b-d3e7-4413-8903-771e192fb86f does not exist
Nov 26 07:41:16 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 29128d62-e193-4e59-9485-79ad11543262 does not exist
Nov 26 07:41:16 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev cb9697a3-f981-4b9d-aaa9-ace88bb0505f does not exist
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:41:16 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:41:16 np0005536586 python3.9[107498]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.054450759 +0000 UTC m=+0.026248755 container create 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:41:17 np0005536586 systemd[1]: Started libpod-conmon-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope.
Nov 26 07:41:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.107487871 +0000 UTC m=+0.079285877 container init 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.112854284 +0000 UTC m=+0.084652270 container start 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.114808556 +0000 UTC m=+0.086606542 container attach 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:41:17 np0005536586 fervent_northcutt[107646]: 167 167
Nov 26 07:41:17 np0005536586 systemd[1]: libpod-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope: Deactivated successfully.
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.116590634 +0000 UTC m=+0.088388620 container died 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:41:17 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2c614989a51f8bb2acb31cc66200842ab564217f77878addca62143bd29b1a0c-merged.mount: Deactivated successfully.
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.134671636 +0000 UTC m=+0.106469622 container remove 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:41:17 np0005536586 podman[107632]: 2025-11-26 12:41:17.043658121 +0000 UTC m=+0.015456127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:17 np0005536586 systemd[1]: libpod-conmon-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope: Deactivated successfully.
Nov 26 07:41:17 np0005536586 podman[107692]: 2025-11-26 12:41:17.24610548 +0000 UTC m=+0.026290862 container create e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:41:17 np0005536586 systemd[1]: Started libpod-conmon-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope.
Nov 26 07:41:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:17 np0005536586 podman[107692]: 2025-11-26 12:41:17.305073439 +0000 UTC m=+0.085258841 container init e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 07:41:17 np0005536586 podman[107692]: 2025-11-26 12:41:17.309576104 +0000 UTC m=+0.089761486 container start e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 07:41:17 np0005536586 podman[107692]: 2025-11-26 12:41:17.310803135 +0000 UTC m=+0.090988518 container attach e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:41:17 np0005536586 podman[107692]: 2025-11-26 12:41:17.234714355 +0000 UTC m=+0.014899758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:41:17 np0005536586 python3.9[107814]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 07:41:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 26 07:41:17 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 07:41:18 np0005536586 python3.9[107978]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 07:41:18 np0005536586 modest_haibt[107734]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:41:18 np0005536586 modest_haibt[107734]: --> relative data size: 1.0
Nov 26 07:41:18 np0005536586 modest_haibt[107734]: --> All data devices are unavailable
Nov 26 07:41:18 np0005536586 systemd[1]: libpod-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope: Deactivated successfully.
Nov 26 07:41:18 np0005536586 podman[107692]: 2025-11-26 12:41:18.157646178 +0000 UTC m=+0.937831570 container died e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:41:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023-merged.mount: Deactivated successfully.
Nov 26 07:41:18 np0005536586 podman[107692]: 2025-11-26 12:41:18.195613252 +0000 UTC m=+0.975798634 container remove e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:41:18 np0005536586 systemd[1]: libpod-conmon-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope: Deactivated successfully.
Nov 26 07:41:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 26 07:41:18 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 07:41:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 07:41:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 26 07:41:18 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 26 07:41:18 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659957886s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 active pruub 186.385116577s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:18 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:18 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.626020106 +0000 UTC m=+0.032523038 container create 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:41:18 np0005536586 systemd[1]: Started libpod-conmon-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope.
Nov 26 07:41:18 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.673367027 +0000 UTC m=+0.079869978 container init 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.678815503 +0000 UTC m=+0.085318435 container start 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.680064417 +0000 UTC m=+0.086567348 container attach 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:41:18 np0005536586 naughty_curie[108299]: 167 167
Nov 26 07:41:18 np0005536586 systemd[1]: libpod-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope: Deactivated successfully.
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.682957018 +0000 UTC m=+0.089459949 container died 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:41:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1f4944a1844b2c551686814df7bb754851a80e8ba42b2672e9a87b4c2a92b18c-merged.mount: Deactivated successfully.
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.702270242 +0000 UTC m=+0.108773173 container remove 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:41:18 np0005536586 podman[108286]: 2025-11-26 12:41:18.613130056 +0000 UTC m=+0.019632988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:18 np0005536586 systemd[1]: libpod-conmon-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope: Deactivated successfully.
Nov 26 07:41:18 np0005536586 python3.9[108285]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:41:18 np0005536586 podman[108336]: 2025-11-26 12:41:18.82150706 +0000 UTC m=+0.029598696 container create 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:41:18 np0005536586 systemd[1]: Started libpod-conmon-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope.
Nov 26 07:41:18 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:18 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:18 np0005536586 podman[108336]: 2025-11-26 12:41:18.880165836 +0000 UTC m=+0.088257483 container init 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:41:18 np0005536586 podman[108336]: 2025-11-26 12:41:18.885820782 +0000 UTC m=+0.093912418 container start 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:41:18 np0005536586 podman[108336]: 2025-11-26 12:41:18.887063984 +0000 UTC m=+0.095155641 container attach 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:41:18 np0005536586 podman[108336]: 2025-11-26 12:41:18.808697723 +0000 UTC m=+0.016789379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:18 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 07:41:19 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 07:41:19 np0005536586 python3.9[108491]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 26 07:41:19 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:19 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:19 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:19 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]: {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    "0": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "devices": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "/dev/loop3"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            ],
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_name": "ceph_lv0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_size": "21470642176",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "name": "ceph_lv0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "tags": {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_name": "ceph",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.crush_device_class": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.encrypted": "0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_id": "0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.vdo": "0"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            },
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "vg_name": "ceph_vg0"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        }
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    ],
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    "1": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "devices": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "/dev/loop4"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            ],
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_name": "ceph_lv1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_size": "21470642176",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "name": "ceph_lv1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "tags": {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_name": "ceph",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.crush_device_class": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.encrypted": "0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_id": "1",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.vdo": "0"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            },
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "vg_name": "ceph_vg1"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        }
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    ],
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    "2": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "devices": [
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "/dev/loop5"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            ],
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_name": "ceph_lv2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_size": "21470642176",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "name": "ceph_lv2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "tags": {
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.cluster_name": "ceph",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.crush_device_class": "",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.encrypted": "0",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osd_id": "2",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:                "ceph.vdo": "0"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            },
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "type": "block",
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:            "vg_name": "ceph_vg2"
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:        }
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]:    ]
Nov 26 07:41:19 np0005536586 hopeful_cray[108359]: }
Nov 26 07:41:19 np0005536586 systemd[1]: libpod-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope: Deactivated successfully.
Nov 26 07:41:19 np0005536586 podman[108336]: 2025-11-26 12:41:19.527618609 +0000 UTC m=+0.735710244 container died 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:41:19 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51-merged.mount: Deactivated successfully.
Nov 26 07:41:19 np0005536586 podman[108336]: 2025-11-26 12:41:19.561322881 +0000 UTC m=+0.769414517 container remove 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:41:19 np0005536586 systemd[1]: libpod-conmon-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope: Deactivated successfully.
Nov 26 07:41:19 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 07:41:19 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 07:41:19 np0005536586 python3.9[108672]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 26 07:41:19 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 07:41:19 np0005536586 podman[108790]: 2025-11-26 12:41:19.977166893 +0000 UTC m=+0.026906913 container create 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:41:20 np0005536586 systemd[1]: Started libpod-conmon-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope.
Nov 26 07:41:20 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:20.037530442 +0000 UTC m=+0.087270462 container init 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:20.042912483 +0000 UTC m=+0.092652504 container start 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:20.044695522 +0000 UTC m=+0.094435563 container attach 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:41:20 np0005536586 musing_jones[108804]: 167 167
Nov 26 07:41:20 np0005536586 systemd[1]: libpod-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope: Deactivated successfully.
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:20.04684836 +0000 UTC m=+0.096588380 container died 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:41:20 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2d2351c2551ad19a0050c83b4f1862446f8838f4c74b7608d75e9c71dd125d4b-merged.mount: Deactivated successfully.
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:19.966258718 +0000 UTC m=+0.015998759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:20 np0005536586 podman[108790]: 2025-11-26 12:41:20.064818452 +0000 UTC m=+0.114558474 container remove 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:41:20 np0005536586 systemd[1]: libpod-conmon-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope: Deactivated successfully.
Nov 26 07:41:20 np0005536586 podman[108826]: 2025-11-26 12:41:20.177664008 +0000 UTC m=+0.027491576 container create 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:41:20 np0005536586 systemd[1]: Started libpod-conmon-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope.
Nov 26 07:41:20 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:41:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:41:20 np0005536586 podman[108826]: 2025-11-26 12:41:20.234216685 +0000 UTC m=+0.084044263 container init 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:41:20 np0005536586 podman[108826]: 2025-11-26 12:41:20.251099849 +0000 UTC m=+0.100927417 container start 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:41:20 np0005536586 podman[108826]: 2025-11-26 12:41:20.252343782 +0000 UTC m=+0.102171351 container attach 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:41:20 np0005536586 podman[108826]: 2025-11-26 12:41:20.16640985 +0000 UTC m=+0.016237439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 26 07:41:20 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 26 07:41:20 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:20 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:20 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 26 07:41:20 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438099861s) [0] async=[0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 44'389 active pruub 192.607360840s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:20 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]: {
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_id": 1,
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "type": "bluestore"
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    },
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_id": 2,
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "type": "bluestore"
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    },
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_id": 0,
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:        "type": "bluestore"
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]:    }
Nov 26 07:41:20 np0005536586 inspiring_wescoff[108839]: }
Nov 26 07:41:21 np0005536586 systemd[1]: libpod-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope: Deactivated successfully.
Nov 26 07:41:21 np0005536586 podman[108826]: 2025-11-26 12:41:21.013569849 +0000 UTC m=+0.863397417 container died 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:41:21 np0005536586 systemd[1]: var-lib-containers-storage-overlay-acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c-merged.mount: Deactivated successfully.
Nov 26 07:41:21 np0005536586 podman[108826]: 2025-11-26 12:41:21.044108907 +0000 UTC m=+0.893936475 container remove 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:41:21 np0005536586 systemd[1]: libpod-conmon-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope: Deactivated successfully.
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:21 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev fd71e5db-2569-4a61-8491-87400ce4a034 does not exist
Nov 26 07:41:21 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 5024ec4c-c025-4e48-aff8-f712fb69df67 does not exist
Nov 26 07:41:21 np0005536586 python3.9[109058]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:41:21 np0005536586 python3.9[109235]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:41:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 26 07:41:21 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 26 07:41:21 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:22 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 26 07:41:22 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 26 07:41:22 np0005536586 python3.9[109313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:41:22 np0005536586 python3.9[109465]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:41:22 np0005536586 python3.9[109543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:41:23 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 07:41:23 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 07:41:23 np0005536586 python3.9[109695]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:23 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 07:41:23 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 07:41:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 26 07:41:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 07:41:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 07:41:24 np0005536586 python3.9[109846]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:41:25 np0005536586 python3.9[109998]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 07:41:25 np0005536586 python3.9[110148]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:41:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 07:41:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:26 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 26 07:41:26 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 26 07:41:26 np0005536586 python3.9[110300]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:41:26 np0005536586 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 07:41:26 np0005536586 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 07:41:26 np0005536586 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 07:41:26 np0005536586 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 07:41:27 np0005536586 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 07:41:27 np0005536586 python3.9[110462]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 07:41:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 0 objects/s recovering
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 26 07:41:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 07:41:28 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.622039795s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 active pruub 196.307983398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:28 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:28 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 26 07:41:28 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 07:41:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 26 07:41:28 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 26 07:41:28 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:28 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:28 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:28 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:29 np0005536586 python3.9[110614]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:41:29 np0005536586 python3.9[110768]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:41:29 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 07:41:29 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 07:41:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 26 07:41:29 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962300301s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 active pruub 199.162155151s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:29 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 26 07:41:29 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:29 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 07:41:30 np0005536586 systemd[1]: session-34.scope: Deactivated successfully.
Nov 26 07:41:30 np0005536586 systemd[1]: session-34.scope: Consumed 46.117s CPU time.
Nov 26 07:41:30 np0005536586 systemd-logind[777]: Session 34 logged out. Waiting for processes to exit.
Nov 26 07:41:30 np0005536586 systemd-logind[777]: Removed session 34.
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:30 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 07:41:30 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 07:41:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 26 07:41:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 26 07:41:30 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 26 07:41:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:30 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156332970s) [0] async=[0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 44'389 active pruub 202.327163696s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:30 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:30 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:30 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:30 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 07:41:31 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 07:41:31 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 07:41:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 07:41:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 07:41:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 07:41:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 26 07:41:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 26 07:41:31 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 26 07:41:31 np0005536586 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:32 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Nov 26 07:41:32 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Nov 26 07:41:32 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:32 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 26 07:41:32 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 26 07:41:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 26 07:41:33 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 26 07:41:33 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 26 07:41:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.470242500s) [1] async=[1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 44'389 active pruub 204.689910889s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:33 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:41:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:41:33 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:41:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 07:41:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 26 07:41:34 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 26 07:41:34 np0005536586 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 26 07:41:34 np0005536586 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:41:35 np0005536586 systemd-logind[777]: New session 35 of user zuul.
Nov 26 07:41:35 np0005536586 systemd[1]: Started Session 35 of User zuul.
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:41:35
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:41:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:41:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:36 np0005536586 python3.9[110948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:41:36 np0005536586 python3.9[111104]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 07:41:37 np0005536586 python3.9[111257]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:41:37 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 07:41:37 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 07:41:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 07:41:38 np0005536586 python3.9[111341]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 07:41:39 np0005536586 python3.9[111494]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 07:41:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:41 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 07:41:41 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 07:41:41 np0005536586 python3.9[111647]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:41:41 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 07:41:41 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 07:41:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 26 07:41:41 np0005536586 python3.9[111800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:41:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 07:41:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 07:41:42 np0005536586 python3.9[111952]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 07:41:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 07:41:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 07:41:43 np0005536586 python3.9[112102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:41:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 07:41:43 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 07:41:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 07:41:43 np0005536586 python3.9[112260]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 07:41:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:41:44 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:41:45 np0005536586 python3.9[112413]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:41:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 26 07:41:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:46 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 26 07:41:46 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 26 07:41:46 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Nov 26 07:41:46 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Nov 26 07:41:46 np0005536586 python3.9[112700]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 07:41:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 26 07:41:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 26 07:41:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 26 07:41:47 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 26 07:41:47 np0005536586 python3.9[112850]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:41:47 np0005536586 python3.9[113004]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 07:41:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 07:41:48 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 07:41:48 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 07:41:49 np0005536586 python3.9[113157]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:41:49 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 07:41:49 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 07:41:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:50 np0005536586 python3.9[113310]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:41:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:51 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 07:41:51 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 07:41:51 np0005536586 python3.9[113464]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 26 07:41:51 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 26 07:41:51 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 26 07:41:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:51 np0005536586 systemd-logind[777]: Session 35 logged out. Waiting for processes to exit.
Nov 26 07:41:51 np0005536586 systemd[1]: session-35.scope: Deactivated successfully.
Nov 26 07:41:51 np0005536586 systemd[1]: session-35.scope: Consumed 12.764s CPU time.
Nov 26 07:41:51 np0005536586 systemd-logind[777]: Removed session 35.
Nov 26 07:41:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 26 07:41:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 26 07:41:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 07:41:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 07:41:54 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 26 07:41:54 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 26 07:41:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:41:56 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 07:41:56 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 07:41:56 np0005536586 systemd-logind[777]: New session 36 of user zuul.
Nov 26 07:41:56 np0005536586 systemd[1]: Started Session 36 of User zuul.
Nov 26 07:41:56 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 07:41:56 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 07:41:57 np0005536586 python3.9[113642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:41:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:41:58 np0005536586 python3.9[113796]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:41:59 np0005536586 python3.9[113989]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:41:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 26 07:41:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 26 07:41:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 07:41:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 07:41:59 np0005536586 systemd[1]: session-36.scope: Deactivated successfully.
Nov 26 07:41:59 np0005536586 systemd[1]: session-36.scope: Consumed 1.947s CPU time.
Nov 26 07:41:59 np0005536586 systemd-logind[777]: Session 36 logged out. Waiting for processes to exit.
Nov 26 07:41:59 np0005536586 systemd-logind[777]: Removed session 36.
Nov 26 07:41:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 26 07:42:00 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 26 07:42:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 07:42:01 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 07:42:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 26 07:42:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 26 07:42:03 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 07:42:03 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 07:42:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:04 np0005536586 systemd-logind[777]: New session 37 of user zuul.
Nov 26 07:42:04 np0005536586 systemd[1]: Started Session 37 of User zuul.
Nov 26 07:42:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 07:42:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 07:42:05 np0005536586 python3.9[114168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:06 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 26 07:42:06 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 26 07:42:06 np0005536586 python3.9[114322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:42:06 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 26 07:42:06 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 26 07:42:06 np0005536586 python3.9[114478]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:42:07 np0005536586 python3.9[114562]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:42:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 26 07:42:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 26 07:42:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 26 07:42:08 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 26 07:42:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 07:42:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 07:42:09 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 07:42:09 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 07:42:09 np0005536586 python3.9[114715]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:42:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:09 np0005536586 python3.9[114910]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:10 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 26 07:42:10 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 26 07:42:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Nov 26 07:42:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Nov 26 07:42:10 np0005536586 python3.9[115062]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:42:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:11 np0005536586 python3.9[115223]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:11 np0005536586 python3.9[115301]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:11 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Nov 26 07:42:11 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Nov 26 07:42:11 np0005536586 python3.9[115453]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:12 np0005536586 python3.9[115531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 26 07:42:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 26 07:42:12 np0005536586 python3.9[115683]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:13 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 07:42:13 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 07:42:13 np0005536586 python3.9[115835]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:13 np0005536586 python3.9[115987]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:14 np0005536586 python3.9[116139]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:14 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 07:42:14 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 07:42:14 np0005536586 python3.9[116291]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:42:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 26 07:42:15 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 26 07:42:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:16 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 26 07:42:16 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 26 07:42:16 np0005536586 python3.9[116444]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:42:16 np0005536586 python3.9[116598]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:42:17 np0005536586 python3.9[116750]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:42:17 np0005536586 python3.9[116902]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:42:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:18 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 26 07:42:18 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 26 07:42:18 np0005536586 python3.9[117055]: ansible-service_facts Invoked
Nov 26 07:42:18 np0005536586 network[117072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:42:18 np0005536586 network[117073]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:42:18 np0005536586 network[117074]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:42:19 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 26 07:42:19 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 26 07:42:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:20 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 26 07:42:20 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 26 07:42:20 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 26 07:42:20 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 26 07:42:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:21 np0005536586 python3.9[117549]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:21 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 702f5c97-136a-4d01-98ff-d37ac22665e4 does not exist
Nov 26 07:42:21 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 730d1a08-54ec-4750-9a72-7a5eb86e42c8 does not exist
Nov 26 07:42:21 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 7ae52e54-2f0e-4949-bc1f-36d50358ff46 does not exist
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:42:21 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:42:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.068994181 +0000 UTC m=+0.027985860 container create 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:22 np0005536586 systemd[1]: Started libpod-conmon-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope.
Nov 26 07:42:22 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.122092594 +0000 UTC m=+0.081084283 container init 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.126456789 +0000 UTC m=+0.085448458 container start 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.127523769 +0000 UTC m=+0.086515439 container attach 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:42:22 np0005536586 magical_matsumoto[117801]: 167 167
Nov 26 07:42:22 np0005536586 systemd[1]: libpod-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope: Deactivated successfully.
Nov 26 07:42:22 np0005536586 conmon[117801]: conmon 13638a4c1a17906637aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope/container/memory.events
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.131176332 +0000 UTC m=+0.090168001 container died 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:42:22 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9438a59dba94669e26cf8f4704fad680d3b92e38f098fb69a0147e89db505a4a-merged.mount: Deactivated successfully.
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.153338216 +0000 UTC m=+0.112329885 container remove 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 07:42:22 np0005536586 podman[117788]: 2025-11-26 12:42:22.057780308 +0000 UTC m=+0.016771998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:22 np0005536586 systemd[1]: libpod-conmon-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope: Deactivated successfully.
Nov 26 07:42:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:42:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:22 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:42:22 np0005536586 podman[117823]: 2025-11-26 12:42:22.265247096 +0000 UTC m=+0.027915798 container create 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:42:22 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 07:42:22 np0005536586 systemd[1]: Started libpod-conmon-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope.
Nov 26 07:42:22 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 07:42:22 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:22 np0005536586 podman[117823]: 2025-11-26 12:42:22.311391886 +0000 UTC m=+0.074060586 container init 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 26 07:42:22 np0005536586 podman[117823]: 2025-11-26 12:42:22.318343305 +0000 UTC m=+0.081012006 container start 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:42:22 np0005536586 podman[117823]: 2025-11-26 12:42:22.319334934 +0000 UTC m=+0.082003634 container attach 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:42:22 np0005536586 podman[117823]: 2025-11-26 12:42:22.253094275 +0000 UTC m=+0.015762976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:22 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 26 07:42:22 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 26 07:42:23 np0005536586 optimistic_boyd[117836]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:42:23 np0005536586 optimistic_boyd[117836]: --> relative data size: 1.0
Nov 26 07:42:23 np0005536586 optimistic_boyd[117836]: --> All data devices are unavailable
Nov 26 07:42:23 np0005536586 systemd[1]: libpod-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope: Deactivated successfully.
Nov 26 07:42:23 np0005536586 podman[117823]: 2025-11-26 12:42:23.127673053 +0000 UTC m=+0.890341755 container died 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:42:23 np0005536586 systemd[1]: var-lib-containers-storage-overlay-aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a-merged.mount: Deactivated successfully.
Nov 26 07:42:23 np0005536586 podman[117823]: 2025-11-26 12:42:23.162231147 +0000 UTC m=+0.924899848 container remove 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:42:23 np0005536586 systemd[1]: libpod-conmon-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope: Deactivated successfully.
Nov 26 07:42:23 np0005536586 python3.9[118016]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.588696653 +0000 UTC m=+0.031464745 container create 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:42:23 np0005536586 systemd[1]: Started libpod-conmon-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope.
Nov 26 07:42:23 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.646545889 +0000 UTC m=+0.089313981 container init 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.650816207 +0000 UTC m=+0.093584289 container start 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.652212227 +0000 UTC m=+0.094980310 container attach 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:42:23 np0005536586 goofy_goldstine[118197]: 167 167
Nov 26 07:42:23 np0005536586 systemd[1]: libpod-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope: Deactivated successfully.
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.654540765 +0000 UTC m=+0.097308847 container died 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:23 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9a2e2f62a458a10aab5b1bfc42da666a4a878c953d29757c20d7c64b19bf5961-merged.mount: Deactivated successfully.
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.574618254 +0000 UTC m=+0.017386356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:23 np0005536586 podman[118160]: 2025-11-26 12:42:23.672663086 +0000 UTC m=+0.115431169 container remove 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:42:23 np0005536586 systemd[1]: libpod-conmon-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope: Deactivated successfully.
Nov 26 07:42:23 np0005536586 podman[118219]: 2025-11-26 12:42:23.786440348 +0000 UTC m=+0.029062608 container create 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:23 np0005536586 systemd[1]: Started libpod-conmon-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope.
Nov 26 07:42:23 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:23 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:23 np0005536586 podman[118219]: 2025-11-26 12:42:23.844290635 +0000 UTC m=+0.086912916 container init 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:42:23 np0005536586 podman[118219]: 2025-11-26 12:42:23.850066741 +0000 UTC m=+0.092689002 container start 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:42:23 np0005536586 podman[118219]: 2025-11-26 12:42:23.85120223 +0000 UTC m=+0.093824490 container attach 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:42:23 np0005536586 podman[118219]: 2025-11-26 12:42:23.774921892 +0000 UTC m=+0.017544173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:24 np0005536586 python3.9[118364]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:24 np0005536586 reverent_spence[118232]: {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    "0": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "devices": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "/dev/loop3"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            ],
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_name": "ceph_lv0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_size": "21470642176",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "name": "ceph_lv0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "tags": {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_name": "ceph",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.crush_device_class": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.encrypted": "0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_id": "0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.vdo": "0"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            },
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "vg_name": "ceph_vg0"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        }
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    ],
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    "1": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "devices": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "/dev/loop4"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            ],
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_name": "ceph_lv1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_size": "21470642176",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "name": "ceph_lv1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "tags": {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_name": "ceph",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.crush_device_class": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.encrypted": "0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_id": "1",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.vdo": "0"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            },
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "vg_name": "ceph_vg1"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        }
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    ],
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    "2": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "devices": [
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "/dev/loop5"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            ],
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_name": "ceph_lv2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_size": "21470642176",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "name": "ceph_lv2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "tags": {
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.cluster_name": "ceph",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.crush_device_class": "",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.encrypted": "0",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osd_id": "2",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:                "ceph.vdo": "0"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            },
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "type": "block",
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:            "vg_name": "ceph_vg2"
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:        }
Nov 26 07:42:24 np0005536586 reverent_spence[118232]:    ]
Nov 26 07:42:24 np0005536586 reverent_spence[118232]: }
Nov 26 07:42:24 np0005536586 systemd[1]: libpod-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope: Deactivated successfully.
Nov 26 07:42:24 np0005536586 podman[118219]: 2025-11-26 12:42:24.519147261 +0000 UTC m=+0.761769522 container died 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:42:24 np0005536586 systemd[1]: var-lib-containers-storage-overlay-49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee-merged.mount: Deactivated successfully.
Nov 26 07:42:24 np0005536586 python3.9[118442]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:24 np0005536586 podman[118219]: 2025-11-26 12:42:24.54954242 +0000 UTC m=+0.792164681 container remove 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:42:24 np0005536586 systemd[1]: libpod-conmon-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope: Deactivated successfully.
Nov 26 07:42:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 26 07:42:24 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 26 07:42:24 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 26 07:42:24 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 26 07:42:24 np0005536586 podman[118741]: 2025-11-26 12:42:24.987438538 +0000 UTC m=+0.027406869 container create 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:42:25 np0005536586 systemd[1]: Started libpod-conmon-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope.
Nov 26 07:42:25 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:25 np0005536586 python3.9[118721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:25.042316855 +0000 UTC m=+0.082285196 container init 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:25.047704238 +0000 UTC m=+0.087672569 container start 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:25.048744298 +0000 UTC m=+0.088712629 container attach 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:42:25 np0005536586 vigilant_curran[118754]: 167 167
Nov 26 07:42:25 np0005536586 systemd[1]: libpod-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope: Deactivated successfully.
Nov 26 07:42:25 np0005536586 conmon[118754]: conmon 7716cb19720cf415605a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope/container/memory.events
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:25.051825043 +0000 UTC m=+0.091793394 container died 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:25 np0005536586 systemd[1]: var-lib-containers-storage-overlay-21a8e92cd3bf203fe401307c7a4ad9e76998a194433ede8f79f3f8c5247592ae-merged.mount: Deactivated successfully.
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:24.976543977 +0000 UTC m=+0.016512318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:25 np0005536586 podman[118741]: 2025-11-26 12:42:25.076158689 +0000 UTC m=+0.116127021 container remove 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 07:42:25 np0005536586 systemd[1]: libpod-conmon-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope: Deactivated successfully.
Nov 26 07:42:25 np0005536586 podman[118807]: 2025-11-26 12:42:25.193348391 +0000 UTC m=+0.032716164 container create ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:42:25 np0005536586 systemd[1]: Started libpod-conmon-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope.
Nov 26 07:42:25 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:42:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:25 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:42:25 np0005536586 podman[118807]: 2025-11-26 12:42:25.248506976 +0000 UTC m=+0.087874748 container init ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:42:25 np0005536586 podman[118807]: 2025-11-26 12:42:25.253470711 +0000 UTC m=+0.092838474 container start ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:42:25 np0005536586 podman[118807]: 2025-11-26 12:42:25.254505251 +0000 UTC m=+0.093873023 container attach ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:42:25 np0005536586 podman[118807]: 2025-11-26 12:42:25.181474775 +0000 UTC m=+0.020842558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:42:25 np0005536586 python3.9[118869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:26 np0005536586 goofy_euler[118865]: {
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_id": 1,
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "type": "bluestore"
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    },
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_id": 2,
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "type": "bluestore"
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    },
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_id": 0,
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:        "type": "bluestore"
Nov 26 07:42:26 np0005536586 goofy_euler[118865]:    }
Nov 26 07:42:26 np0005536586 goofy_euler[118865]: }
Nov 26 07:42:26 np0005536586 systemd[1]: libpod-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope: Deactivated successfully.
Nov 26 07:42:26 np0005536586 podman[119052]: 2025-11-26 12:42:26.060090743 +0000 UTC m=+0.018449580 container died ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:42:26 np0005536586 systemd[1]: var-lib-containers-storage-overlay-155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db-merged.mount: Deactivated successfully.
Nov 26 07:42:26 np0005536586 podman[119052]: 2025-11-26 12:42:26.090099934 +0000 UTC m=+0.048458772 container remove ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:42:26 np0005536586 systemd[1]: libpod-conmon-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope: Deactivated successfully.
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev b16754c2-ff89-4c11-b913-784d4d2be1d5 does not exist
Nov 26 07:42:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0ef91a1d-aebe-4997-9bda-fbb14816b259 does not exist
Nov 26 07:42:26 np0005536586 python3.9[119049]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:42:26 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 26 07:42:26 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 26 07:42:26 np0005536586 python3.9[119265]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:42:27 np0005536586 python3.9[119349]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:42:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:28 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 07:42:28 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 07:42:28 np0005536586 systemd[1]: session-37.scope: Deactivated successfully.
Nov 26 07:42:28 np0005536586 systemd[1]: session-37.scope: Consumed 16.836s CPU time.
Nov 26 07:42:28 np0005536586 systemd-logind[777]: Session 37 logged out. Waiting for processes to exit.
Nov 26 07:42:28 np0005536586 systemd-logind[777]: Removed session 37.
Nov 26 07:42:28 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 26 07:42:28 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 26 07:42:29 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 26 07:42:29 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 26 07:42:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:30 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 26 07:42:30 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 26 07:42:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:31 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 26 07:42:31 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 26 07:42:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 26 07:42:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 26 07:42:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:32 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 26 07:42:32 np0005536586 systemd-logind[777]: New session 38 of user zuul.
Nov 26 07:42:32 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 26 07:42:32 np0005536586 systemd[1]: Started Session 38 of User zuul.
Nov 26 07:42:32 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 26 07:42:32 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 26 07:42:33 np0005536586 python3.9[119531]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:33 np0005536586 python3.9[119683]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:34 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 07:42:34 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 07:42:34 np0005536586 python3.9[119761]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:34 np0005536586 systemd[1]: session-38.scope: Deactivated successfully.
Nov 26 07:42:34 np0005536586 systemd[1]: session-38.scope: Consumed 1.100s CPU time.
Nov 26 07:42:34 np0005536586 systemd-logind[777]: Session 38 logged out. Waiting for processes to exit.
Nov 26 07:42:34 np0005536586 systemd-logind[777]: Removed session 38.
Nov 26 07:42:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 26 07:42:34 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:42:35
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:42:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:36 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 26 07:42:36 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 26 07:42:37 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 07:42:37 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 07:42:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:40 np0005536586 systemd-logind[777]: New session 39 of user zuul.
Nov 26 07:42:40 np0005536586 systemd[1]: Started Session 39 of User zuul.
Nov 26 07:42:40 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 26 07:42:40 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 26 07:42:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:40 np0005536586 python3.9[119939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:42:41 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 07:42:41 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 07:42:41 np0005536586 python3.9[120095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Nov 26 07:42:42 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Nov 26 07:42:42 np0005536586 python3.9[120270]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 26 07:42:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 26 07:42:42 np0005536586 python3.9[120348]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.yhz2wz8t recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:43 np0005536586 python3.9[120500]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:43 np0005536586 python3.9[120578]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.xkgjozvv recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 26 07:42:43 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 26 07:42:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:44 np0005536586 python3.9[120730]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:44 np0005536586 python3.9[120882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:44 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 26 07:42:44 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 26 07:42:44 np0005536586 python3.9[120960]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 26 07:42:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:42:45 np0005536586 python3.9[121112]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:45 np0005536586 python3.9[121190]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:42:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:45 np0005536586 python3.9[121342]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:46 np0005536586 python3.9[121494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:46 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 26 07:42:46 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 26 07:42:46 np0005536586 python3.9[121572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 26 07:42:46 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 26 07:42:47 np0005536586 python3.9[121724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:47 np0005536586 python3.9[121802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:47 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 26 07:42:47 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 26 07:42:47 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 26 07:42:47 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 26 07:42:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:48 np0005536586 python3.9[121954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:42:48 np0005536586 systemd[1]: Reloading.
Nov 26 07:42:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 07:42:48 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 07:42:48 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:42:48 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:42:48 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 26 07:42:48 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 26 07:42:49 np0005536586 python3.9[122143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:49 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 07:42:49 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 07:42:49 np0005536586 python3.9[122221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Nov 26 07:42:49 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Nov 26 07:42:49 np0005536586 python3.9[122373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:50 np0005536586 python3.9[122451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:50 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 26 07:42:50 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 26 07:42:50 np0005536586 python3.9[122603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:42:50 np0005536586 systemd[1]: Reloading.
Nov 26 07:42:50 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:42:50 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:42:50 np0005536586 systemd[1]: Starting Create netns directory...
Nov 26 07:42:50 np0005536586 systemd[76457]: Created slice User Background Tasks Slice.
Nov 26 07:42:50 np0005536586 systemd[76457]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 07:42:50 np0005536586 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 07:42:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:50 np0005536586 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 07:42:50 np0005536586 systemd[1]: Finished Create netns directory.
Nov 26 07:42:50 np0005536586 systemd[76457]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 07:42:51 np0005536586 python3.9[122797]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:42:51 np0005536586 network[122814]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:42:51 np0005536586 network[122815]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:42:51 np0005536586 network[122816]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:42:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:52 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 07:42:52 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 07:42:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 26 07:42:53 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 26 07:42:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:53 np0005536586 python3.9[123078]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 07:42:54 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 07:42:54 np0005536586 python3.9[123156]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:54 np0005536586 python3.9[123308]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:55 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 07:42:55 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 07:42:55 np0005536586 python3.9[123460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:55 np0005536586 python3.9[123538]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:42:56 np0005536586 python3.9[123690]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 07:42:56 np0005536586 systemd[1]: Starting Time & Date Service...
Nov 26 07:42:56 np0005536586 systemd[1]: Started Time & Date Service.
Nov 26 07:42:56 np0005536586 python3.9[123846]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 26 07:42:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 26 07:42:57 np0005536586 python3.9[123998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:57 np0005536586 python3.9[124076]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:42:58 np0005536586 python3.9[124228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:58 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 26 07:42:58 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 26 07:42:58 np0005536586 python3.9[124306]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0iu31q6i recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 26 07:42:59 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 26 07:42:59 np0005536586 python3.9[124458]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:42:59 np0005536586 python3.9[124536]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:42:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Nov 26 07:42:59 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Nov 26 07:42:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:00 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 07:43:00 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 07:43:00 np0005536586 python3.9[124688]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:43:00 np0005536586 python3[124841]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 07:43:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 07:43:01 np0005536586 python3.9[124993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 07:43:01 np0005536586 python3.9[125071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:02 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 07:43:02 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 07:43:02 np0005536586 python3.9[125223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 26 07:43:02 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 26 07:43:02 np0005536586 python3.9[125301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 26 07:43:02 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 26 07:43:03 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 07:43:03 np0005536586 python3.9[125453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:03 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 07:43:03 np0005536586 python3.9[125531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 26 07:43:03 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 26 07:43:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:04 np0005536586 python3.9[125683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:04 np0005536586 python3.9[125761]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:04 np0005536586 python3.9[125913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:05 np0005536586 python3.9[125991]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:05 np0005536586 python3.9[126143]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:43:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:06 np0005536586 python3.9[126298]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:06 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 07:43:06 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 07:43:07 np0005536586 python3.9[126450]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:07 np0005536586 python3.9[126602]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:08 np0005536586 python3.9[126754]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 07:43:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 07:43:08 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 07:43:08 np0005536586 python3.9[126906]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 07:43:09 np0005536586 systemd-logind[777]: Session 39 logged out. Waiting for processes to exit.
Nov 26 07:43:09 np0005536586 systemd[1]: session-39.scope: Deactivated successfully.
Nov 26 07:43:09 np0005536586 systemd[1]: session-39.scope: Consumed 21.413s CPU time.
Nov 26 07:43:09 np0005536586 systemd-logind[777]: Removed session 39.
Nov 26 07:43:09 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 26 07:43:09 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 26 07:43:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 26 07:43:10 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 26 07:43:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 26 07:43:12 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 26 07:43:13 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 26 07:43:13 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 26 07:43:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 26 07:43:13 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 26 07:43:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:14 np0005536586 systemd-logind[777]: New session 40 of user zuul.
Nov 26 07:43:14 np0005536586 systemd[1]: Started Session 40 of User zuul.
Nov 26 07:43:14 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 26 07:43:14 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 26 07:43:14 np0005536586 python3.9[127086]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 07:43:15 np0005536586 python3.9[127238]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:43:15 np0005536586 python3.9[127392]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 26 07:43:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:16 np0005536586 python3.9[127544]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.fpfcw7t5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Nov 26 07:43:16 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Nov 26 07:43:16 np0005536586 python3.9[127669]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.fpfcw7t5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160995.9718838-44-176946474651612/.source.fpfcw7t5 _original_basename=.6p_s6o99 follow=False checksum=e21e3f9e4941376571ab17089734df5da9d861b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:17 np0005536586 python3.9[127821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:43:17 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 26 07:43:17 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 26 07:43:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:18 np0005536586 python3.9[127973]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZE1dpxvL8OPz/VjvFsUTPfsDH6vQml5mdj02SrlFJXfQ252JoKh5fIbIe5jq+eMTBsdiCv9Uyd8xyCUarLeNlJLXFWeql+5MwT2PuY4qrfay7YgFarsvqVEneCieDB/KjZaqMenEf/yZJjvCZifypNg9Of1e8QgrIOrGdP8zeyVeSR6g7d477abOVM7jqxl1dgu5rM+rlTW4DHASE9s/qzG6qu1p1HB8ZEiKsXEtoLhomhrwcTSk94ELWY62pIn8cyapkDsX3TnUoIzQZE8wHuKD+UpY8fWfvFoKo+fdR3UnZmegzF7lylv9XeU/lSEgeDN/LggErCBVNDLBaUG54mPUhEXh3MLVnzgSeCs+DGrchncrg0mgqgKPeAPoZrH+WzFuvKCCsGBjrX8QhxkOy2Q43UXW4uIZlhuzPSsZEnqjd+oz98yWJanGeEkfPCs4nqf6Btd135JYpY2UQoryGnawaWQx/nbU9rePlzY7IbAuDaivVwT3RTKUEmoXfmis=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKuDB4s6WXjGK+4hbQXMcwUNsMga+M2cTnBcJkimQdRS#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2PGuuGeSfke7nCSgI56m6cuyn45RHczvKouRcqVMRuIWRuDTGV0zknjmAVTtZjpkmBwAytv1rMLkBGlVHtizM=#012 create=True mode=0644 path=/tmp/ansible.fpfcw7t5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:18 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 26 07:43:18 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 26 07:43:18 np0005536586 python3.9[128125]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fpfcw7t5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:43:19 np0005536586 python3.9[128279]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fpfcw7t5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:19 np0005536586 systemd[1]: session-40.scope: Deactivated successfully.
Nov 26 07:43:19 np0005536586 systemd[1]: session-40.scope: Consumed 4.041s CPU time.
Nov 26 07:43:19 np0005536586 systemd-logind[777]: Session 40 logged out. Waiting for processes to exit.
Nov 26 07:43:19 np0005536586 systemd-logind[777]: Removed session 40.
Nov 26 07:43:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 26 07:43:21 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 26 07:43:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:22 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 26 07:43:22 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 26 07:43:23 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 26 07:43:23 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 26 07:43:23 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 07:43:23 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 07:43:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:25 np0005536586 systemd-logind[777]: New session 41 of user zuul.
Nov 26 07:43:25 np0005536586 systemd[1]: Started Session 41 of User zuul.
Nov 26 07:43:25 np0005536586 python3.9[128458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:43:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:26 np0005536586 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 6d67bc07-ad0b-42a8-974c-6a91946468d7 does not exist
Nov 26 07:43:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev f5b11044-1555-48f8-9e0b-2a5ee90e629a does not exist
Nov 26 07:43:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 485e85eb-d0e6-40ad-9c32-e7893136cc13 does not exist
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:43:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:43:26 np0005536586 python3.9[128730]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 07:43:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:43:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:27 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.380217197 +0000 UTC m=+0.037285450 container create 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:43:27 np0005536586 systemd[1]: Started libpod-conmon-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope.
Nov 26 07:43:27 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.457653805 +0000 UTC m=+0.114722079 container init 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.364490423 +0000 UTC m=+0.021558697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.463684031 +0000 UTC m=+0.120752284 container start 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.464956033 +0000 UTC m=+0.122024286 container attach 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:43:27 np0005536586 vigilant_moore[129042]: 167 167
Nov 26 07:43:27 np0005536586 systemd[1]: libpod-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope: Deactivated successfully.
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.471175636 +0000 UTC m=+0.128243890 container died 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:43:27 np0005536586 systemd[1]: var-lib-containers-storage-overlay-fa2cc98a9ea91c0952ec76633d30c3fc2e0b7fd3b1a492ab9a985cd58b7c650e-merged.mount: Deactivated successfully.
Nov 26 07:43:27 np0005536586 podman[128994]: 2025-11-26 12:43:27.493868292 +0000 UTC m=+0.150936546 container remove 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:43:27 np0005536586 systemd[1]: libpod-conmon-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope: Deactivated successfully.
Nov 26 07:43:27 np0005536586 podman[129065]: 2025-11-26 12:43:27.630494818 +0000 UTC m=+0.038464537 container create 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:43:27 np0005536586 systemd[1]: Started libpod-conmon-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope.
Nov 26 07:43:27 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:27 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:27 np0005536586 podman[129065]: 2025-11-26 12:43:27.700548267 +0000 UTC m=+0.108517996 container init 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:43:27 np0005536586 python3.9[129044]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:43:27 np0005536586 podman[129065]: 2025-11-26 12:43:27.708892942 +0000 UTC m=+0.116862661 container start 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:43:27 np0005536586 podman[129065]: 2025-11-26 12:43:27.614504907 +0000 UTC m=+0.022474647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:27 np0005536586 podman[129065]: 2025-11-26 12:43:27.71050726 +0000 UTC m=+0.118476969 container attach 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 07:43:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:28 np0005536586 python3.9[129235]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:43:28 np0005536586 festive_wu[129078]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:43:28 np0005536586 festive_wu[129078]: --> relative data size: 1.0
Nov 26 07:43:28 np0005536586 festive_wu[129078]: --> All data devices are unavailable
Nov 26 07:43:28 np0005536586 systemd[1]: libpod-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope: Deactivated successfully.
Nov 26 07:43:28 np0005536586 podman[129065]: 2025-11-26 12:43:28.660713655 +0000 UTC m=+1.068683375 container died 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:43:28 np0005536586 systemd[1]: var-lib-containers-storage-overlay-37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b-merged.mount: Deactivated successfully.
Nov 26 07:43:28 np0005536586 podman[129065]: 2025-11-26 12:43:28.701658284 +0000 UTC m=+1.109628003 container remove 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:28 np0005536586 systemd[1]: libpod-conmon-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope: Deactivated successfully.
Nov 26 07:43:29 np0005536586 python3.9[129504]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.275896879 +0000 UTC m=+0.041255787 container create b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:43:29 np0005536586 systemd[1]: Started libpod-conmon-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope.
Nov 26 07:43:29 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.341858763 +0000 UTC m=+0.107217680 container init b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.348833713 +0000 UTC m=+0.114192620 container start b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.350313958 +0000 UTC m=+0.115672885 container attach b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:43:29 np0005536586 kind_goodall[129622]: 167 167
Nov 26 07:43:29 np0005536586 systemd[1]: libpod-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope: Deactivated successfully.
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.355037335 +0000 UTC m=+0.120396372 container died b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.261561853 +0000 UTC m=+0.026920781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:29 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a48673d1afcfc1f3dc805d8234ac4e4c5e4c7863405f18625e01766aa2192188-merged.mount: Deactivated successfully.
Nov 26 07:43:29 np0005536586 podman[129578]: 2025-11-26 12:43:29.375467019 +0000 UTC m=+0.140825927 container remove b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:43:29 np0005536586 systemd[1]: libpod-conmon-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope: Deactivated successfully.
Nov 26 07:43:29 np0005536586 podman[129665]: 2025-11-26 12:43:29.517469246 +0000 UTC m=+0.041759838 container create cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:43:29 np0005536586 systemd[1]: Started libpod-conmon-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope.
Nov 26 07:43:29 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:29 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:29 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:29 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:29 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:29 np0005536586 podman[129665]: 2025-11-26 12:43:29.58391914 +0000 UTC m=+0.108209754 container init cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:43:29 np0005536586 podman[129665]: 2025-11-26 12:43:29.590879992 +0000 UTC m=+0.115170595 container start cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 07:43:29 np0005536586 podman[129665]: 2025-11-26 12:43:29.592551649 +0000 UTC m=+0.116842242 container attach cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:43:29 np0005536586 podman[129665]: 2025-11-26 12:43:29.501887795 +0000 UTC m=+0.026178408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:29 np0005536586 python3.9[129758]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:30 np0005536586 systemd-logind[777]: Session 41 logged out. Waiting for processes to exit.
Nov 26 07:43:30 np0005536586 systemd[1]: session-41.scope: Deactivated successfully.
Nov 26 07:43:30 np0005536586 systemd[1]: session-41.scope: Consumed 3.398s CPU time.
Nov 26 07:43:30 np0005536586 systemd-logind[777]: Removed session 41.
Nov 26 07:43:30 np0005536586 elated_hertz[129701]: {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    "0": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "devices": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "/dev/loop3"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            ],
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_name": "ceph_lv0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_size": "21470642176",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "name": "ceph_lv0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "tags": {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_name": "ceph",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.crush_device_class": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.encrypted": "0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_id": "0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.vdo": "0"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            },
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "vg_name": "ceph_vg0"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        }
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    ],
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    "1": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "devices": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "/dev/loop4"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            ],
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_name": "ceph_lv1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_size": "21470642176",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "name": "ceph_lv1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "tags": {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_name": "ceph",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.crush_device_class": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.encrypted": "0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_id": "1",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.vdo": "0"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            },
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "vg_name": "ceph_vg1"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        }
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    ],
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    "2": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "devices": [
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "/dev/loop5"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            ],
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_name": "ceph_lv2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_size": "21470642176",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "name": "ceph_lv2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "tags": {
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.cluster_name": "ceph",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.crush_device_class": "",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.encrypted": "0",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osd_id": "2",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:                "ceph.vdo": "0"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            },
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "type": "block",
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:            "vg_name": "ceph_vg2"
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:        }
Nov 26 07:43:30 np0005536586 elated_hertz[129701]:    ]
Nov 26 07:43:30 np0005536586 elated_hertz[129701]: }
Nov 26 07:43:30 np0005536586 systemd[1]: libpod-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope: Deactivated successfully.
Nov 26 07:43:30 np0005536586 podman[129665]: 2025-11-26 12:43:30.29440672 +0000 UTC m=+0.818697332 container died cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:43:30 np0005536586 systemd[1]: var-lib-containers-storage-overlay-632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22-merged.mount: Deactivated successfully.
Nov 26 07:43:30 np0005536586 podman[129665]: 2025-11-26 12:43:30.350406503 +0000 UTC m=+0.874697097 container remove cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:43:30 np0005536586 systemd[1]: libpod-conmon-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope: Deactivated successfully.
Nov 26 07:43:30 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 26 07:43:30 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.875229156 +0000 UTC m=+0.029483879 container create 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:43:30 np0005536586 systemd[1]: Started libpod-conmon-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope.
Nov 26 07:43:30 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.936482189 +0000 UTC m=+0.090736912 container init 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.943118519 +0000 UTC m=+0.097373243 container start 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.944916063 +0000 UTC m=+0.099170786 container attach 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 07:43:30 np0005536586 boring_fermi[129942]: 167 167
Nov 26 07:43:30 np0005536586 systemd[1]: libpod-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope: Deactivated successfully.
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.947749383 +0000 UTC m=+0.102004106 container died 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.863044611 +0000 UTC m=+0.017299344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:30 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d5e8f63d3522eeb4fe993e71fdcf687824e183d8e280afbfb81c7ea572acd06a-merged.mount: Deactivated successfully.
Nov 26 07:43:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:30 np0005536586 podman[129928]: 2025-11-26 12:43:30.970460534 +0000 UTC m=+0.124715256 container remove 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:43:30 np0005536586 systemd[1]: libpod-conmon-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope: Deactivated successfully.
Nov 26 07:43:31 np0005536586 podman[129964]: 2025-11-26 12:43:31.096052695 +0000 UTC m=+0.032948361 container create babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:43:31 np0005536586 systemd[1]: Started libpod-conmon-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope.
Nov 26 07:43:31 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:43:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:31 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:43:31 np0005536586 podman[129964]: 2025-11-26 12:43:31.165288219 +0000 UTC m=+0.102183885 container init babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:43:31 np0005536586 podman[129964]: 2025-11-26 12:43:31.171341048 +0000 UTC m=+0.108236713 container start babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:43:31 np0005536586 podman[129964]: 2025-11-26 12:43:31.17520256 +0000 UTC m=+0.112098225 container attach babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:31 np0005536586 podman[129964]: 2025-11-26 12:43:31.08300488 +0000 UTC m=+0.019900555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:43:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 26 07:43:31 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 26 07:43:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:31 np0005536586 musing_lewin[129977]: {
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_id": 1,
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "type": "bluestore"
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    },
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_id": 2,
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "type": "bluestore"
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    },
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_id": 0,
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:        "type": "bluestore"
Nov 26 07:43:31 np0005536586 musing_lewin[129977]:    }
Nov 26 07:43:31 np0005536586 musing_lewin[129977]: }
Nov 26 07:43:32 np0005536586 systemd[1]: libpod-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope: Deactivated successfully.
Nov 26 07:43:32 np0005536586 podman[129964]: 2025-11-26 12:43:32.007448182 +0000 UTC m=+0.944343867 container died babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:43:32 np0005536586 systemd[1]: var-lib-containers-storage-overlay-cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb-merged.mount: Deactivated successfully.
Nov 26 07:43:32 np0005536586 podman[129964]: 2025-11-26 12:43:32.050614203 +0000 UTC m=+0.987509859 container remove babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:43:32 np0005536586 systemd[1]: libpod-conmon-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope: Deactivated successfully.
Nov 26 07:43:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:43:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:43:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:32 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 88dbb656-5756-4fff-b2f7-51e3636b0e2f does not exist
Nov 26 07:43:32 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 7e66d59c-1045-4528-959c-c6ffa033dcbf does not exist
Nov 26 07:43:33 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:33 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:43:33 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 26 07:43:33 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 26 07:43:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:34 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 07:43:34 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 07:43:35 np0005536586 systemd-logind[777]: New session 42 of user zuul.
Nov 26 07:43:35 np0005536586 systemd[1]: Started Session 42 of User zuul.
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:43:35
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:43:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:36 np0005536586 python3.9[130224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:43:36 np0005536586 python3.9[130380]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:43:37 np0005536586 python3.9[130464]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 07:43:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:39 np0005536586 python3.9[130615]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:43:39 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 26 07:43:39 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 26 07:43:39 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 26 07:43:39 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 26 07:43:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:40 np0005536586 python3.9[130766]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:43:40 np0005536586 python3.9[130916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:43:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:41 np0005536586 python3.9[131066]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:43:41 np0005536586 systemd[1]: session-42.scope: Deactivated successfully.
Nov 26 07:43:41 np0005536586 systemd[1]: session-42.scope: Consumed 4.556s CPU time.
Nov 26 07:43:41 np0005536586 systemd-logind[777]: Session 42 logged out. Waiting for processes to exit.
Nov 26 07:43:41 np0005536586 systemd-logind[777]: Removed session 42.
Nov 26 07:43:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 07:43:42 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 07:43:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 26 07:43:44 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:43:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 26 07:43:45 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 26 07:43:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:46 np0005536586 systemd-logind[777]: New session 43 of user zuul.
Nov 26 07:43:46 np0005536586 systemd[1]: Started Session 43 of User zuul.
Nov 26 07:43:47 np0005536586 python3.9[131244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:43:47 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 26 07:43:47 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 26 07:43:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:48 np0005536586 python3.9[131400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:48 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 07:43:48 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 07:43:48 np0005536586 python3.9[131552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.137930) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029137998, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7253, "num_deletes": 251, "total_data_size": 9480536, "memory_usage": 9717632, "flush_reason": "Manual Compaction"}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029152543, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7583249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 134, "largest_seqno": 7384, "table_properties": {"data_size": 7556632, "index_size": 17222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77306, "raw_average_key_size": 23, "raw_value_size": 7493312, "raw_average_value_size": 2264, "num_data_blocks": 756, "num_entries": 3309, "num_filter_entries": 3309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160615, "oldest_key_time": 1764160615, "file_creation_time": 1764161029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 14662 microseconds, and 12090 cpu microseconds.
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152593) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7583249 bytes OK
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152618) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152962) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152978) EVENT_LOG_v1 {"time_micros": 1764161029152974, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.153003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9448810, prev total WAL file size 9448810, number of live WAL files 2.
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.154565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7405KB) 13(52KB) 8(1944B)]
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029154662, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7639159, "oldest_snapshot_seqno": -1}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3123 keys, 7595185 bytes, temperature: kUnknown
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029170038, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7595185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7569038, "index_size": 17269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 75337, "raw_average_key_size": 24, "raw_value_size": 7507317, "raw_average_value_size": 2403, "num_data_blocks": 760, "num_entries": 3123, "num_filter_entries": 3123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170180) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7595185 bytes
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 495.7 rd, 492.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3412, records dropped: 289 output_compression: NoCompression
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170522) EVENT_LOG_v1 {"time_micros": 1764161029170514, "job": 4, "event": "compaction_finished", "compaction_time_micros": 15410, "compaction_time_cpu_micros": 12857, "output_level": 6, "num_output_files": 1, "total_output_size": 7595185, "num_input_records": 3412, "num_output_records": 3123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171369, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171425, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171455, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 26 07:43:49 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.154495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:43:49 np0005536586 python3.9[131705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:50 np0005536586 python3.9[131828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161029.0472236-65-152336084713964/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=341ca2fc409c9190c99d327bf21634777e517827 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:50 np0005536586 python3.9[131980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:50 np0005536586 python3.9[132103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161030.1742167-65-214120028910979/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=69e5ba039761a8ef5a94c218b10e6621452398f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:51 np0005536586 python3.9[132255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 07:43:51 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 07:43:51 np0005536586 python3.9[132378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161031.0117486-65-29109148638256/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e107e0dc1f5999b737bd6fda13616c3460e5af4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:52 np0005536586 python3.9[132530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:52 np0005536586 python3.9[132682]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:53 np0005536586 python3.9[132834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:53 np0005536586 python3.9[132957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161032.8567283-124-154011641634611/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2d7354a1447831a9eecceecd082a82ae74e08486 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:54 np0005536586 python3.9[133109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:54 np0005536586 python3.9[133232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161033.6856782-124-249558399270437/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a78596f424093bc5574244d010f70c6e099d950f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:54 np0005536586 python3.9[133384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:55 np0005536586 python3.9[133507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161034.6498072-124-214167972970372/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dec74b950237e7b2512d888c01e1656030773611 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:55 np0005536586 python3.9[133659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:55 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 26 07:43:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:43:55 np0005536586 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 26 07:43:56 np0005536586 python3.9[133811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:56 np0005536586 python3.9[133963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:57 np0005536586 python3.9[134086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161036.4187741-183-261956650564171/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03cf771e66ad07a5c7a4525fd0c7443e989e3317 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:57 np0005536586 python3.9[134238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 26 07:43:57 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 26 07:43:57 np0005536586 python3.9[134361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161037.2417974-183-149220296729121/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a78596f424093bc5574244d010f70c6e099d950f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:43:58 np0005536586 python3.9[134513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:43:58 np0005536586 python3.9[134636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161038.0552351-183-180112147401526/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=abcccd8f7b37b72b4b6d0d27fa061a87cf2f1ea7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:43:59 np0005536586 python3.9[134788]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:43:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:00 np0005536586 python3.9[134940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:00 np0005536586 python3.9[135063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161039.880723-251-177241378506910/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:01 np0005536586 python3.9[135215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 07:44:01 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 07:44:01 np0005536586 python3.9[135367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:02 np0005536586 python3.9[135490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161041.3289576-275-23312911899851/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:02 np0005536586 python3.9[135642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:03 np0005536586 python3.9[135794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:03 np0005536586 python3.9[135917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161042.7945852-299-81867299563036/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:04 np0005536586 python3.9[136069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:04 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 07:44:04 np0005536586 python3.9[136221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:04 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 07:44:05 np0005536586 python3.9[136344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161044.253786-323-119407292678162/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:05 np0005536586 python3.9[136496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:06 np0005536586 python3.9[136648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:06 np0005536586 python3.9[136771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161045.7734694-347-261900215496543/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:07 np0005536586 python3.9[136923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 07:44:07 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 07:44:07 np0005536586 python3.9[137075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:08 np0005536586 python3.9[137198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161047.2883492-371-149110817839969/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:08 np0005536586 systemd[1]: session-43.scope: Deactivated successfully.
Nov 26 07:44:08 np0005536586 systemd[1]: session-43.scope: Consumed 17.345s CPU time.
Nov 26 07:44:08 np0005536586 systemd-logind[777]: Session 43 logged out. Waiting for processes to exit.
Nov 26 07:44:08 np0005536586 systemd-logind[777]: Removed session 43.
Nov 26 07:44:09 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:11 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 26 07:44:11 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 26 07:44:11 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:13 np0005536586 systemd-logind[777]: New session 44 of user zuul.
Nov 26 07:44:13 np0005536586 systemd[1]: Started Session 44 of User zuul.
Nov 26 07:44:13 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:14 np0005536586 python3.9[137378]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:14 np0005536586 python3.9[137530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:15 np0005536586 python3.9[137653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161054.3973942-34-233556076058310/.source.conf _original_basename=ceph.conf follow=False checksum=547d467ffd9717c8e35ff6810ca30a44e880cfdb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:15 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 26 07:44:15 np0005536586 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 26 07:44:15 np0005536586 python3.9[137805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:15 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:16 np0005536586 python3.9[137928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161055.5687437-34-19694786167669/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=c49cad1c73fc246f2066e2f44ed85f4bdde7800e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:16 np0005536586 systemd-logind[777]: Session 44 logged out. Waiting for processes to exit.
Nov 26 07:44:16 np0005536586 systemd[1]: session-44.scope: Deactivated successfully.
Nov 26 07:44:16 np0005536586 systemd[1]: session-44.scope: Consumed 2.149s CPU time.
Nov 26 07:44:16 np0005536586 systemd-logind[777]: Removed session 44.
Nov 26 07:44:17 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:19 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:21 np0005536586 systemd-logind[777]: New session 45 of user zuul.
Nov 26 07:44:21 np0005536586 systemd[1]: Started Session 45 of User zuul.
Nov 26 07:44:21 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:22 np0005536586 python3.9[138106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:44:23 np0005536586 python3.9[138262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:23 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:23 np0005536586 python3.9[138414]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:24 np0005536586 python3.9[138564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:44:25 np0005536586 python3.9[138716]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 07:44:25 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:26 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 26 07:44:26 np0005536586 python3.9[138872]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:44:27 np0005536586 python3.9[138956]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:44:27 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:29 np0005536586 python3.9[139109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:44:29 np0005536586 python3[139264]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 26 07:44:29 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:30 np0005536586 python3.9[139416]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:30 np0005536586 python3.9[139568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:31 np0005536586 python3.9[139646]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:31 np0005536586 python3.9[139798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:31 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:32 np0005536586 python3.9[139876]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xdtpfoiu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:32 np0005536586 python3.9[140137]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:32 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1b2616eb-e4b3-4d90-ad3b-78315eac5626 does not exist
Nov 26 07:44:32 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev f83aa3a3-3339-4cac-ae8a-75c051ea3155 does not exist
Nov 26 07:44:32 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 34053152-2fec-4ec2-a7af-1f7e46863e57 does not exist
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:44:32 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:44:33 np0005536586 python3.9[140284]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:33 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:44:33 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:33 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.3305205 +0000 UTC m=+0.042124011 container create aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:44:33 np0005536586 systemd[1]: Started libpod-conmon-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope.
Nov 26 07:44:33 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.405502204 +0000 UTC m=+0.117105725 container init aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.314316846 +0000 UTC m=+0.025920377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.411591707 +0000 UTC m=+0.123195219 container start aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.414298354 +0000 UTC m=+0.125901875 container attach aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:44:33 np0005536586 determined_galileo[140456]: 167 167
Nov 26 07:44:33 np0005536586 systemd[1]: libpod-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope: Deactivated successfully.
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.418724325 +0000 UTC m=+0.130327836 container died aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:44:33 np0005536586 systemd[1]: var-lib-containers-storage-overlay-48f0a113ce40f0952c9701e717b2f338aa0920b6bba575ec3a60b95e2b9288e8-merged.mount: Deactivated successfully.
Nov 26 07:44:33 np0005536586 podman[140404]: 2025-11-26 12:44:33.446167507 +0000 UTC m=+0.157771019 container remove aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:44:33 np0005536586 systemd[1]: libpod-conmon-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope: Deactivated successfully.
Nov 26 07:44:33 np0005536586 podman[140502]: 2025-11-26 12:44:33.595026558 +0000 UTC m=+0.040918571 container create 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:44:33 np0005536586 systemd[1]: Started libpod-conmon-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope.
Nov 26 07:44:33 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:33 np0005536586 podman[140502]: 2025-11-26 12:44:33.580213302 +0000 UTC m=+0.026105315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:33 np0005536586 podman[140502]: 2025-11-26 12:44:33.677745398 +0000 UTC m=+0.123637401 container init 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:44:33 np0005536586 podman[140502]: 2025-11-26 12:44:33.687705298 +0000 UTC m=+0.133597301 container start 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:44:33 np0005536586 podman[140502]: 2025-11-26 12:44:33.689060098 +0000 UTC m=+0.134952111 container attach 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:44:33 np0005536586 python3.9[140570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:33 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:34 np0005536586 python3[140733]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 07:44:34 np0005536586 youthful_stonebraker[140555]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:44:34 np0005536586 youthful_stonebraker[140555]: --> relative data size: 1.0
Nov 26 07:44:34 np0005536586 youthful_stonebraker[140555]: --> All data devices are unavailable
Nov 26 07:44:34 np0005536586 systemd[1]: libpod-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope: Deactivated successfully.
Nov 26 07:44:34 np0005536586 podman[140502]: 2025-11-26 12:44:34.616439008 +0000 UTC m=+1.062331012 container died 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:44:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay-28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7-merged.mount: Deactivated successfully.
Nov 26 07:44:34 np0005536586 podman[140502]: 2025-11-26 12:44:34.64951796 +0000 UTC m=+1.095409963 container remove 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:44:34 np0005536586 systemd[1]: libpod-conmon-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope: Deactivated successfully.
Nov 26 07:44:35 np0005536586 python3.9[141012]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.221803649 +0000 UTC m=+0.039644421 container create 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:44:35 np0005536586 systemd[1]: Started libpod-conmon-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope.
Nov 26 07:44:35 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.287043585 +0000 UTC m=+0.104884377 container init 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.294570003 +0000 UTC m=+0.112410776 container start 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.296387685 +0000 UTC m=+0.114228458 container attach 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:44:35 np0005536586 blissful_blackburn[141108]: 167 167
Nov 26 07:44:35 np0005536586 systemd[1]: libpod-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope: Deactivated successfully.
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.299148081 +0000 UTC m=+0.116988854 container died 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.206074229 +0000 UTC m=+0.023915021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:35 np0005536586 systemd[1]: var-lib-containers-storage-overlay-49b32a58a952e28057966d9517e32e8fc1f86e85acdb6a464e60dfa3a01f46b8-merged.mount: Deactivated successfully.
Nov 26 07:44:35 np0005536586 podman[141051]: 2025-11-26 12:44:35.319859662 +0000 UTC m=+0.137700434 container remove 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:44:35 np0005536586 systemd[1]: libpod-conmon-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope: Deactivated successfully.
Nov 26 07:44:35 np0005536586 podman[141130]: 2025-11-26 12:44:35.469067077 +0000 UTC m=+0.042041785 container create c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:44:35 np0005536586 systemd[1]: Started libpod-conmon-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope.
Nov 26 07:44:35 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:35 np0005536586 podman[141130]: 2025-11-26 12:44:35.545669344 +0000 UTC m=+0.118644051 container init c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:44:35 np0005536586 podman[141130]: 2025-11-26 12:44:35.453288635 +0000 UTC m=+0.026263362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:35 np0005536586 podman[141130]: 2025-11-26 12:44:35.551508716 +0000 UTC m=+0.124483423 container start c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:44:35 np0005536586 podman[141130]: 2025-11-26 12:44:35.552632091 +0000 UTC m=+0.125606798 container attach c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 07:44:35 np0005536586 python3.9[141223]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161074.709371-157-115719914962467/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:44:35
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr']
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:44:35 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]: {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    "0": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "devices": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "/dev/loop3"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            ],
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_name": "ceph_lv0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_size": "21470642176",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "name": "ceph_lv0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "tags": {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_name": "ceph",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.crush_device_class": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.encrypted": "0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_id": "0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.vdo": "0"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            },
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "vg_name": "ceph_vg0"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        }
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    ],
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    "1": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "devices": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "/dev/loop4"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            ],
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_name": "ceph_lv1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_size": "21470642176",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "name": "ceph_lv1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "tags": {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_name": "ceph",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.crush_device_class": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.encrypted": "0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_id": "1",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.vdo": "0"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            },
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "vg_name": "ceph_vg1"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        }
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    ],
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    "2": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "devices": [
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "/dev/loop5"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            ],
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_name": "ceph_lv2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_size": "21470642176",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "name": "ceph_lv2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "tags": {
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.cluster_name": "ceph",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.crush_device_class": "",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.encrypted": "0",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osd_id": "2",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:                "ceph.vdo": "0"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            },
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "type": "block",
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:            "vg_name": "ceph_vg2"
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:        }
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]:    ]
Nov 26 07:44:36 np0005536586 pensive_elgamal[141167]: }
Nov 26 07:44:36 np0005536586 systemd[1]: libpod-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope: Deactivated successfully.
Nov 26 07:44:36 np0005536586 conmon[141167]: conmon c25be37af7ba7b2aaf0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope/container/memory.events
Nov 26 07:44:36 np0005536586 podman[141130]: 2025-11-26 12:44:36.237465705 +0000 UTC m=+0.810440422 container died c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:44:36 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0-merged.mount: Deactivated successfully.
Nov 26 07:44:36 np0005536586 podman[141130]: 2025-11-26 12:44:36.283414314 +0000 UTC m=+0.856389020 container remove c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:44:36 np0005536586 systemd[1]: libpod-conmon-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope: Deactivated successfully.
Nov 26 07:44:36 np0005536586 python3.9[141379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:36 np0005536586 python3.9[141622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161075.9246247-172-155368876192524/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.825881048 +0000 UTC m=+0.035404882 container create 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:44:36 np0005536586 systemd[1]: Started libpod-conmon-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope.
Nov 26 07:44:36 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.885696322 +0000 UTC m=+0.095220165 container init 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.893414932 +0000 UTC m=+0.102938765 container start 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.894745768 +0000 UTC m=+0.104269601 container attach 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:44:36 np0005536586 serene_noyce[141667]: 167 167
Nov 26 07:44:36 np0005536586 systemd[1]: libpod-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope: Deactivated successfully.
Nov 26 07:44:36 np0005536586 conmon[141667]: conmon 1d7b760c8916e971f387 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope/container/memory.events
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.899335257 +0000 UTC m=+0.108859090 container died 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.811734016 +0000 UTC m=+0.021257869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:36 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d22e342f50bcd46fb5fe3a750e615239bc7b05fdd8b078dcbbe30ab2618a3d2b-merged.mount: Deactivated successfully.
Nov 26 07:44:36 np0005536586 podman[141646]: 2025-11-26 12:44:36.927574929 +0000 UTC m=+0.137098763 container remove 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:44:36 np0005536586 systemd[1]: libpod-conmon-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope: Deactivated successfully.
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.069490468 +0000 UTC m=+0.037611446 container create 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:44:37 np0005536586 systemd[1]: Started libpod-conmon-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope.
Nov 26 07:44:37 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:44:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.137999107 +0000 UTC m=+0.106120105 container init 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.144165536 +0000 UTC m=+0.112286513 container start 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.145493606 +0000 UTC m=+0.113614584 container attach 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.054067524 +0000 UTC m=+0.022188522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:44:37 np0005536586 python3.9[141851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:37 np0005536586 python3.9[141976]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161076.9696996-187-239240890485320/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]: {
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_id": 1,
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "type": "bluestore"
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    },
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_id": 2,
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "type": "bluestore"
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    },
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_id": 0,
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:        "type": "bluestore"
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]:    }
Nov 26 07:44:37 np0005536586 exciting_hertz[141794]: }
Nov 26 07:44:37 np0005536586 systemd[1]: libpod-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope: Deactivated successfully.
Nov 26 07:44:37 np0005536586 podman[141758]: 2025-11-26 12:44:37.967593134 +0000 UTC m=+0.935714112 container died 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:44:37 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:37 np0005536586 systemd[1]: var-lib-containers-storage-overlay-537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff-merged.mount: Deactivated successfully.
Nov 26 07:44:38 np0005536586 podman[141758]: 2025-11-26 12:44:38.012151589 +0000 UTC m=+0.980272567 container remove 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 07:44:38 np0005536586 systemd[1]: libpod-conmon-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope: Deactivated successfully.
Nov 26 07:44:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:44:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:44:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:38 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9e4c8dc7-61b1-48ff-a6fd-6b7c4bc46899 does not exist
Nov 26 07:44:38 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d445658a-c4be-45e5-b9f5-bb8060dea5a8 does not exist
Nov 26 07:44:38 np0005536586 python3.9[142216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:38 np0005536586 python3.9[142341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161077.9623969-202-35931498877689/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:44:39 np0005536586 python3.9[142493]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:39 np0005536586 python3.9[142618]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161078.931722-217-130550727170573/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:39 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:40 np0005536586 python3.9[142770]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:40 np0005536586 python3.9[142922]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:41 np0005536586 python3.9[143077]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:41 np0005536586 python3.9[143229]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:41 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:42 np0005536586 python3.9[143382]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:44:42 np0005536586 systemd[1]: session-17.scope: Deactivated successfully.
Nov 26 07:44:42 np0005536586 systemd[1]: session-17.scope: Consumed 1min 1.230s CPU time.
Nov 26 07:44:42 np0005536586 systemd-logind[777]: Session 17 logged out. Waiting for processes to exit.
Nov 26 07:44:42 np0005536586 systemd-logind[777]: Removed session 17.
Nov 26 07:44:42 np0005536586 python3.9[143536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:43 np0005536586 python3.9[143691]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:43 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:44 np0005536586 python3.9[143841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:44:44 np0005536586 python3.9[143994]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:44 np0005536586 ovs-vsctl[143995]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:44:45 np0005536586 python3.9[144147]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:45 np0005536586 python3.9[144302]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:44:45 np0005536586 ovs-vsctl[144303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 26 07:44:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:45 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:46 np0005536586 python3.9[144453]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:44:46 np0005536586 python3.9[144607]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:47 np0005536586 python3.9[144759]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:47 np0005536586 python3.9[144837]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:47 np0005536586 python3.9[144989]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:47 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:48 np0005536586 python3.9[145067]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:48 np0005536586 python3.9[145219]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:48 np0005536586 python3.9[145371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:49 np0005536586 python3.9[145449]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:49 np0005536586 python3.9[145601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:49 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:50 np0005536586 python3.9[145679]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:50 np0005536586 python3.9[145831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:44:50 np0005536586 systemd[1]: Reloading.
Nov 26 07:44:50 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:44:50 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:44:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:51 np0005536586 python3.9[146020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:51 np0005536586 python3.9[146098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:51 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:52 np0005536586 python3.9[146250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:52 np0005536586 python3.9[146328]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:52 np0005536586 python3.9[146480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:44:52 np0005536586 systemd[1]: Reloading.
Nov 26 07:44:52 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:44:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:44:53 np0005536586 systemd[1]: Starting Create netns directory...
Nov 26 07:44:53 np0005536586 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 07:44:53 np0005536586 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 07:44:53 np0005536586 systemd[1]: Finished Create netns directory.
Nov 26 07:44:53 np0005536586 python3.9[146672]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:53 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:54 np0005536586 python3.9[146824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:54 np0005536586 python3.9[146947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161093.8254797-468-102601429851239/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:55 np0005536586 python3.9[147099]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:44:55 np0005536586 python3.9[147251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:44:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:44:55 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:56 np0005536586 python3.9[147374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161095.3653884-493-195409107116357/.source.json _original_basename=.hl_2tjq8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:56 np0005536586 python3.9[147526]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:44:57 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:44:58 np0005536586 python3.9[147953]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 26 07:44:58 np0005536586 python3.9[148105]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 07:44:59 np0005536586 python3.9[148257]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 07:44:59 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:00 np0005536586 python3[148429]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 07:45:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:01 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:03 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:05 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:06 np0005536586 podman[148440]: 2025-11-26 12:45:06.476309946 +0000 UTC m=+5.698105012 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 07:45:06 np0005536586 podman[148536]: 2025-11-26 12:45:06.609343232 +0000 UTC m=+0.033224808 container create 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:45:06 np0005536586 podman[148536]: 2025-11-26 12:45:06.595229903 +0000 UTC m=+0.019111489 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 07:45:06 np0005536586 python3[148429]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 07:45:07 np0005536586 python3.9[148715]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:45:07 np0005536586 python3.9[148869]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:07 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:08 np0005536586 python3.9[148945]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:45:08 np0005536586 python3.9[149096]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161108.1722167-581-75682255869537/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:09 np0005536586 python3.9[149172]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:45:09 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:09 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:09 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:09 np0005536586 python3.9[149283]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:45:09 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:09 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:09 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:10 np0005536586 systemd[1]: Starting ovn_controller container...
Nov 26 07:45:10 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8bf7b7e76557e3df4dcb603263dddbf8ea7838cd6ec0dda380d4162886aab8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:10 np0005536586 systemd[1]: Started /usr/bin/podman healthcheck run 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0.
Nov 26 07:45:10 np0005536586 podman[149323]: 2025-11-26 12:45:10.24422118 +0000 UTC m=+0.118883259 container init 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + sudo -E kolla_set_configs
Nov 26 07:45:10 np0005536586 podman[149323]: 2025-11-26 12:45:10.269692274 +0000 UTC m=+0.144354342 container start 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 07:45:10 np0005536586 edpm-start-podman-container[149323]: ovn_controller
Nov 26 07:45:10 np0005536586 systemd[1]: Created slice User Slice of UID 0.
Nov 26 07:45:10 np0005536586 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 26 07:45:10 np0005536586 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 26 07:45:10 np0005536586 systemd[1]: Starting User Manager for UID 0...
Nov 26 07:45:10 np0005536586 edpm-start-podman-container[149322]: Creating additional drop-in dependency for "ovn_controller" (4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0)
Nov 26 07:45:10 np0005536586 podman[149342]: 2025-11-26 12:45:10.345392117 +0000 UTC m=+0.064299059 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:45:10 np0005536586 systemd[1]: 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0-378689c647c93ae2.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 07:45:10 np0005536586 systemd[1]: 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0-378689c647c93ae2.service: Failed with result 'exit-code'.
Nov 26 07:45:10 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:10 np0005536586 systemd[149363]: Queued start job for default target Main User Target.
Nov 26 07:45:10 np0005536586 systemd[149363]: Created slice User Application Slice.
Nov 26 07:45:10 np0005536586 systemd[149363]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 26 07:45:10 np0005536586 systemd[149363]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 07:45:10 np0005536586 systemd[149363]: Reached target Paths.
Nov 26 07:45:10 np0005536586 systemd[149363]: Reached target Timers.
Nov 26 07:45:10 np0005536586 systemd[149363]: Starting D-Bus User Message Bus Socket...
Nov 26 07:45:10 np0005536586 systemd[149363]: Starting Create User's Volatile Files and Directories...
Nov 26 07:45:10 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:10 np0005536586 systemd[149363]: Listening on D-Bus User Message Bus Socket.
Nov 26 07:45:10 np0005536586 systemd[149363]: Reached target Sockets.
Nov 26 07:45:10 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:10 np0005536586 systemd[149363]: Finished Create User's Volatile Files and Directories.
Nov 26 07:45:10 np0005536586 systemd[149363]: Reached target Basic System.
Nov 26 07:45:10 np0005536586 systemd[149363]: Reached target Main User Target.
Nov 26 07:45:10 np0005536586 systemd[149363]: Startup finished in 129ms.
Nov 26 07:45:10 np0005536586 systemd[1]: Started User Manager for UID 0.
Nov 26 07:45:10 np0005536586 systemd[1]: Started ovn_controller container.
Nov 26 07:45:10 np0005536586 systemd[1]: Started Session c1 of User root.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: INFO:__main__:Validating config file
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: INFO:__main__:Writing out command to execute
Nov 26 07:45:10 np0005536586 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: ++ cat /run_command
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + ARGS=
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + sudo kolla_copy_cacerts
Nov 26 07:45:10 np0005536586 systemd[1]: Started Session c2 of User root.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + [[ ! -n '' ]]
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + . kolla_extend_start
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + umask 0022
Nov 26 07:45:10 np0005536586 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7585] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7590] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7599] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7606] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7608] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 07:45:10 np0005536586 kernel: br-int: entered promiscuous mode
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 07:45:10 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7765] manager: (ovn-69681b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 26 07:45:10 np0005536586 systemd-udevd[149497]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:45:10 np0005536586 systemd-udevd[149498]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 07:45:10 np0005536586 kernel: genev_sys_6081: entered promiscuous mode
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7957] device (genev_sys_6081): carrier: link connected
Nov 26 07:45:10 np0005536586 NetworkManager[49024]: <info>  [1764161110.7959] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 26 07:45:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:11 np0005536586 python3.9[149599]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:45:11 np0005536586 ovs-vsctl[149600]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 26 07:45:11 np0005536586 python3.9[149752]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:45:11 np0005536586 ovs-vsctl[149754]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 26 07:45:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:12 np0005536586 python3.9[149907]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:45:12 np0005536586 ovs-vsctl[149908]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 26 07:45:12 np0005536586 systemd[1]: session-45.scope: Deactivated successfully.
Nov 26 07:45:12 np0005536586 systemd[1]: session-45.scope: Consumed 44.799s CPU time.
Nov 26 07:45:12 np0005536586 systemd-logind[777]: Session 45 logged out. Waiting for processes to exit.
Nov 26 07:45:12 np0005536586 systemd-logind[777]: Removed session 45.
Nov 26 07:45:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:17 np0005536586 systemd-logind[777]: New session 47 of user zuul.
Nov 26 07:45:17 np0005536586 systemd[1]: Started Session 47 of User zuul.
Nov 26 07:45:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:18 np0005536586 python3.9[150086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:45:19 np0005536586 python3.9[150242]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:20 np0005536586 python3.9[150394]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:20 np0005536586 python3.9[150546]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:20 np0005536586 systemd[1]: Stopping User Manager for UID 0...
Nov 26 07:45:20 np0005536586 systemd[149363]: Activating special unit Exit the Session...
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped target Main User Target.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped target Basic System.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped target Paths.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped target Sockets.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped target Timers.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 07:45:20 np0005536586 systemd[149363]: Closed D-Bus User Message Bus Socket.
Nov 26 07:45:20 np0005536586 systemd[149363]: Stopped Create User's Volatile Files and Directories.
Nov 26 07:45:20 np0005536586 systemd[149363]: Removed slice User Application Slice.
Nov 26 07:45:20 np0005536586 systemd[149363]: Reached target Shutdown.
Nov 26 07:45:20 np0005536586 systemd[149363]: Finished Exit the Session.
Nov 26 07:45:20 np0005536586 systemd[149363]: Reached target Exit the Session.
Nov 26 07:45:20 np0005536586 systemd[1]: user@0.service: Deactivated successfully.
Nov 26 07:45:20 np0005536586 systemd[1]: Stopped User Manager for UID 0.
Nov 26 07:45:20 np0005536586 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 26 07:45:20 np0005536586 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 26 07:45:20 np0005536586 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 26 07:45:20 np0005536586 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 26 07:45:20 np0005536586 systemd[1]: Removed slice User Slice of UID 0.
Nov 26 07:45:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:21 np0005536586 python3.9[150699]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:21 np0005536586 python3.9[150851]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:22 np0005536586 python3.9[151001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:45:22 np0005536586 python3.9[151153]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 07:45:23 np0005536586 python3.9[151303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:24 np0005536586 python3.9[151425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161123.4477775-86-277372435760151/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:25 np0005536586 python3.9[151575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:25 np0005536586 python3.9[151696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161124.841485-101-121181581287861/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:26 np0005536586 python3.9[151848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:45:26 np0005536586 python3.9[151932]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:45:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:28 np0005536586 python3.9[152085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:45:29 np0005536586 python3.9[152239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:30 np0005536586 python3.9[152361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161128.8275735-138-72786503374940/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:30 np0005536586 python3.9[152511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:30 np0005536586 python3.9[152632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161130.1160903-138-89717683808497/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:31 np0005536586 python3.9[152782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:31 np0005536586 python3.9[152903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161131.3148656-182-172325717945183/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:32 np0005536586 python3.9[153053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:32 np0005536586 python3.9[153174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161132.0942104-182-215787693767612/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:33 np0005536586 python3.9[153324]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:45:33 np0005536586 python3.9[153478]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:34 np0005536586 python3.9[153630]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:34 np0005536586 python3.9[153708]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:34 np0005536586 python3.9[153860]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:35 np0005536586 python3.9[153938]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:35 np0005536586 python3.9[154090]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:45:35
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes']
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:45:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:45:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:36 np0005536586 python3.9[154242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:36 np0005536586 python3.9[154320]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:37 np0005536586 python3.9[154472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:37 np0005536586 python3.9[154550]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:37 np0005536586 python3.9[154702]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:45:37 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:37 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:37 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:38 np0005536586 python3.9[154998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:38 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 61b6c5b2-25a2-43ab-b8c6-75cc96a67ede does not exist
Nov 26 07:45:38 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 11ba02a1-5d8e-4791-a340-e642c3bc4467 does not exist
Nov 26 07:45:38 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9682eea2-873f-490f-b2d2-af8d9b33b46a does not exist
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:45:38 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:45:38 np0005536586 python3.9[155145]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.150964665 +0000 UTC m=+0.029123228 container create 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 07:45:39 np0005536586 systemd[1]: Started libpod-conmon-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope.
Nov 26 07:45:39 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.208023608 +0000 UTC m=+0.086182181 container init 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:45:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:45:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:39 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.214609651 +0000 UTC m=+0.092768214 container start 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.215635343 +0000 UTC m=+0.093793896 container attach 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:45:39 np0005536586 beautiful_chaplygin[155316]: 167 167
Nov 26 07:45:39 np0005536586 systemd[1]: libpod-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope: Deactivated successfully.
Nov 26 07:45:39 np0005536586 conmon[155316]: conmon 14a644ea5c943545d673 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope/container/memory.events
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.219587211 +0000 UTC m=+0.097745764 container died 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:45:39 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e59acd8c946a00c7559c77f7fb4e98986f5b8899d2aeab0bc90b3f771dd3995a-merged.mount: Deactivated successfully.
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.140098957 +0000 UTC m=+0.018257530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:39 np0005536586 podman[155274]: 2025-11-26 12:45:39.244228189 +0000 UTC m=+0.122386742 container remove 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:45:39 np0005536586 systemd[1]: libpod-conmon-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope: Deactivated successfully.
Nov 26 07:45:39 np0005536586 podman[155412]: 2025-11-26 12:45:39.368925624 +0000 UTC m=+0.031498903 container create b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 07:45:39 np0005536586 systemd[1]: Started libpod-conmon-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope.
Nov 26 07:45:39 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:39 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:39 np0005536586 podman[155412]: 2025-11-26 12:45:39.427926056 +0000 UTC m=+0.090499336 container init b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:45:39 np0005536586 podman[155412]: 2025-11-26 12:45:39.435982951 +0000 UTC m=+0.098556230 container start b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:45:39 np0005536586 podman[155412]: 2025-11-26 12:45:39.437861392 +0000 UTC m=+0.100434670 container attach b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:45:39 np0005536586 podman[155412]: 2025-11-26 12:45:39.356323334 +0000 UTC m=+0.018896613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:39 np0005536586 python3.9[155415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:39 np0005536586 python3.9[155509]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:40 np0005536586 modest_mclaren[155427]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:45:40 np0005536586 modest_mclaren[155427]: --> relative data size: 1.0
Nov 26 07:45:40 np0005536586 modest_mclaren[155427]: --> All data devices are unavailable
Nov 26 07:45:40 np0005536586 systemd[1]: libpod-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope: Deactivated successfully.
Nov 26 07:45:40 np0005536586 podman[155412]: 2025-11-26 12:45:40.263843899 +0000 UTC m=+0.926417178 container died b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:45:40 np0005536586 systemd[1]: var-lib-containers-storage-overlay-430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db-merged.mount: Deactivated successfully.
Nov 26 07:45:40 np0005536586 podman[155412]: 2025-11-26 12:45:40.298105638 +0000 UTC m=+0.960678917 container remove b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:45:40 np0005536586 systemd[1]: libpod-conmon-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope: Deactivated successfully.
Nov 26 07:45:40 np0005536586 python3.9[155675]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:45:40 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:40 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:40Z|00025|memory|INFO|16128 kB peak resident set size after 29.7 seconds
Nov 26 07:45:40 np0005536586 ovn_controller[149335]: 2025-11-26T12:45:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 26 07:45:40 np0005536586 podman[155719]: 2025-11-26 12:45:40.487223892 +0000 UTC m=+0.098817092 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 26 07:45:40 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:40 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:40 np0005536586 systemd[1]: Starting Create netns directory...
Nov 26 07:45:40 np0005536586 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 07:45:40 np0005536586 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 07:45:40 np0005536586 systemd[1]: Finished Create netns directory.
Nov 26 07:45:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.000570236 +0000 UTC m=+0.030150382 container create c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:45:41 np0005536586 systemd[1]: Started libpod-conmon-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope.
Nov 26 07:45:41 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.054157957 +0000 UTC m=+0.083738104 container init c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.058976378 +0000 UTC m=+0.088556525 container start c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.062402424 +0000 UTC m=+0.091982581 container attach c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:45:41 np0005536586 heuristic_leakey[156007]: 167 167
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.063090712 +0000 UTC m=+0.092670848 container died c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:45:41 np0005536586 systemd[1]: libpod-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope: Deactivated successfully.
Nov 26 07:45:41 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e4700f2ad6f949246b3e687a2f29172c42f1f58d78d1a289a60cbd13a59590c0-merged.mount: Deactivated successfully.
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:41.084099064 +0000 UTC m=+0.113679212 container remove c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:45:41 np0005536586 podman[155967]: 2025-11-26 12:45:40.989148069 +0000 UTC m=+0.018728236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:41 np0005536586 systemd[1]: libpod-conmon-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope: Deactivated successfully.
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.209105332 +0000 UTC m=+0.029120223 container create 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:45:41 np0005536586 systemd[1]: Started libpod-conmon-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope.
Nov 26 07:45:41 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:41 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:41 np0005536586 python3.9[156071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.276083478 +0000 UTC m=+0.096098359 container init 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.281750568 +0000 UTC m=+0.101765450 container start 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.283225488 +0000 UTC m=+0.103240369 container attach 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.196943812 +0000 UTC m=+0.016958713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:41 np0005536586 python3.9[156246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:41 np0005536586 tender_albattani[156090]: {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    "0": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "devices": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "/dev/loop3"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            ],
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_name": "ceph_lv0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_size": "21470642176",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "name": "ceph_lv0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "tags": {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_name": "ceph",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.crush_device_class": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.encrypted": "0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_id": "0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.vdo": "0"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            },
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "vg_name": "ceph_vg0"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        }
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    ],
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    "1": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "devices": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "/dev/loop4"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            ],
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_name": "ceph_lv1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_size": "21470642176",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "name": "ceph_lv1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "tags": {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_name": "ceph",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.crush_device_class": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.encrypted": "0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_id": "1",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.vdo": "0"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            },
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "vg_name": "ceph_vg1"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        }
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    ],
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    "2": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "devices": [
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "/dev/loop5"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            ],
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_name": "ceph_lv2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_size": "21470642176",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "name": "ceph_lv2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "tags": {
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.cluster_name": "ceph",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.crush_device_class": "",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.encrypted": "0",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osd_id": "2",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:                "ceph.vdo": "0"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            },
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "type": "block",
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:            "vg_name": "ceph_vg2"
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:        }
Nov 26 07:45:41 np0005536586 tender_albattani[156090]:    ]
Nov 26 07:45:41 np0005536586 tender_albattani[156090]: }
Nov 26 07:45:41 np0005536586 systemd[1]: libpod-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope: Deactivated successfully.
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.920879678 +0000 UTC m=+0.740894549 container died 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:45:41 np0005536586 systemd[1]: var-lib-containers-storage-overlay-4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f-merged.mount: Deactivated successfully.
Nov 26 07:45:41 np0005536586 podman[156077]: 2025-11-26 12:45:41.955313105 +0000 UTC m=+0.775327987 container remove 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:45:41 np0005536586 systemd[1]: libpod-conmon-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope: Deactivated successfully.
Nov 26 07:45:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:42 np0005536586 python3.9[156400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161141.4068146-333-180842987639987/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.391489806 +0000 UTC m=+0.027686152 container create 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:45:42 np0005536586 systemd[1]: Started libpod-conmon-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope.
Nov 26 07:45:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.446040769 +0000 UTC m=+0.082237115 container init 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.450589372 +0000 UTC m=+0.086785718 container start 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.45167072 +0000 UTC m=+0.087867066 container attach 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:45:42 np0005536586 pensive_mayer[156553]: 167 167
Nov 26 07:45:42 np0005536586 systemd[1]: libpod-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope: Deactivated successfully.
Nov 26 07:45:42 np0005536586 conmon[156553]: conmon 03d68ddb6bc9274b5e60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope/container/memory.events
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.455569258 +0000 UTC m=+0.091765604 container died 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:45:42 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e273a87e39c1d96a9fb56585217f31f40c70045bd455fd408c4cf00223ed0dba-merged.mount: Deactivated successfully.
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.471552893 +0000 UTC m=+0.107749238 container remove 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:45:42 np0005536586 podman[156540]: 2025-11-26 12:45:42.379884444 +0000 UTC m=+0.016080810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:42 np0005536586 systemd[1]: libpod-conmon-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope: Deactivated successfully.
Nov 26 07:45:42 np0005536586 podman[156598]: 2025-11-26 12:45:42.588966694 +0000 UTC m=+0.027916826 container create f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 07:45:42 np0005536586 systemd[1]: Started libpod-conmon-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope.
Nov 26 07:45:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:45:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:45:42 np0005536586 podman[156598]: 2025-11-26 12:45:42.642511912 +0000 UTC m=+0.081462064 container init f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:45:42 np0005536586 podman[156598]: 2025-11-26 12:45:42.64842666 +0000 UTC m=+0.087376792 container start f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:45:42 np0005536586 podman[156598]: 2025-11-26 12:45:42.649435121 +0000 UTC m=+0.088385253 container attach f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:45:42 np0005536586 podman[156598]: 2025-11-26 12:45:42.57809274 +0000 UTC m=+0.017042892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:45:42 np0005536586 python3.9[156721]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:45:43 np0005536586 python3.9[156875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]: {
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_id": 1,
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "type": "bluestore"
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    },
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_id": 2,
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "type": "bluestore"
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    },
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_id": 0,
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:        "type": "bluestore"
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]:    }
Nov 26 07:45:43 np0005536586 heuristic_lalande[156641]: }
Nov 26 07:45:43 np0005536586 systemd[1]: libpod-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope: Deactivated successfully.
Nov 26 07:45:43 np0005536586 conmon[156641]: conmon f0f078753f48766653bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope/container/memory.events
Nov 26 07:45:43 np0005536586 podman[156598]: 2025-11-26 12:45:43.433460067 +0000 UTC m=+0.872410269 container died f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:45:43 np0005536586 systemd[1]: var-lib-containers-storage-overlay-96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9-merged.mount: Deactivated successfully.
Nov 26 07:45:43 np0005536586 podman[156598]: 2025-11-26 12:45:43.466509695 +0000 UTC m=+0.905459828 container remove f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:45:43 np0005536586 systemd[1]: libpod-conmon-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope: Deactivated successfully.
Nov 26 07:45:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:45:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:43 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:45:43 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:43 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev c5023bd2-292f-43da-93b6-00d49925100e does not exist
Nov 26 07:45:43 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev ba7b5c30-9bb6-49b0-98f7-f47a3705e021 does not exist
Nov 26 07:45:43 np0005536586 python3.9[157085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161143.0413847-358-71446052308829/.source.json _original_basename=.e4q8diq7 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:44 np0005536586 python3.9[157237]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:45:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:45:45 np0005536586 python3.9[157664]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 26 07:45:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:46 np0005536586 python3.9[157816]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 07:45:47 np0005536586 python3.9[157968]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 07:45:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:48 np0005536586 python3[158138]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 07:45:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:45:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:56 np0005536586 podman[158149]: 2025-11-26 12:45:56.537385797 +0000 UTC m=+8.105951629 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 07:45:56 np0005536586 podman[158248]: 2025-11-26 12:45:56.638647477 +0000 UTC m=+0.029763568 container create 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:45:56 np0005536586 podman[158248]: 2025-11-26 12:45:56.624554044 +0000 UTC m=+0.015670145 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 07:45:56 np0005536586 python3[158138]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 07:45:57 np0005536586 python3.9[158426]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:45:57 np0005536586 python3.9[158580]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:45:58 np0005536586 python3.9[158656]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:45:58 np0005536586 python3.9[158807]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161158.2123039-446-272438720783303/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:45:59 np0005536586 python3.9[158883]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:45:59 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:59 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:59 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:59 np0005536586 python3.9[158994]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:45:59 np0005536586 systemd[1]: Reloading.
Nov 26 07:45:59 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:45:59 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:45:59 np0005536586 systemd[1]: Starting ovn_metadata_agent container...
Nov 26 07:46:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:00 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a095883700a37a5a884e5aec0798798d800e840306049128f2d208c384baa0/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a095883700a37a5a884e5aec0798798d800e840306049128f2d208c384baa0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:00 np0005536586 systemd[1]: Started /usr/bin/podman healthcheck run 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff.
Nov 26 07:46:00 np0005536586 podman[159035]: 2025-11-26 12:46:00.09441738 +0000 UTC m=+0.082748179 container init 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + sudo -E kolla_set_configs
Nov 26 07:46:00 np0005536586 podman[159035]: 2025-11-26 12:46:00.112548832 +0000 UTC m=+0.100879610 container start 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 26 07:46:00 np0005536586 edpm-start-podman-container[159035]: ovn_metadata_agent
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Validating config file
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Copying service configuration files
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Writing out command to execute
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 26 07:46:00 np0005536586 podman[159055]: 2025-11-26 12:46:00.160348752 +0000 UTC m=+0.041162601 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: ++ cat /run_command
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + CMD=neutron-ovn-metadata-agent
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + ARGS=
Nov 26 07:46:00 np0005536586 edpm-start-podman-container[159034]: Creating additional drop-in dependency for "ovn_metadata_agent" (5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff)
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + sudo kolla_copy_cacerts
Nov 26 07:46:00 np0005536586 systemd[1]: Reloading.
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + [[ ! -n '' ]]
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + . kolla_extend_start
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + umask 0022
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: + exec neutron-ovn-metadata-agent
Nov 26 07:46:00 np0005536586 ovn_metadata_agent[159048]: Running command: 'neutron-ovn-metadata-agent'
Nov 26 07:46:00 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:46:00 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:46:00 np0005536586 systemd[1]: Started ovn_metadata_agent container.
Nov 26 07:46:00 np0005536586 systemd-logind[777]: Session 47 logged out. Waiting for processes to exit.
Nov 26 07:46:00 np0005536586 systemd[1]: session-47.scope: Deactivated successfully.
Nov 26 07:46:00 np0005536586 systemd[1]: session-47.scope: Consumed 40.972s CPU time.
Nov 26 07:46:00 np0005536586 systemd-logind[777]: Removed session 47.
Nov 26 07:46:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.718 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.729 159053 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1a132c77-5dda-4b90-923d-26a448f3fef6 (UUID: 1a132c77-5dda-4b90-923d-26a448f3fef6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.754 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.759 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.764 159053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1a132c77-5dda-4b90-923d-26a448f3fef6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f12de3dbbe0>], external_ids={}, name=1a132c77-5dda-4b90-923d-26a448f3fef6, nb_cfg_timestamp=1764161118780, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f12de35ee80>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.766 159053 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.766 159053 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.770 159053 DEBUG oslo_service.service [-] Started child 159155 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.773 159053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp4ovitr4s/privsep.sock']#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.773 159155 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429504'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.789 159155 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.790 159155 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.790 159155 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.792 159155 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.798 159155 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 26 07:46:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.802 159155 INFO eventlet.wsgi.server [-] (159155) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 26 07:46:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:02 np0005536586 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.299 159053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.300 159053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4ovitr4s/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.219 159160 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.222 159160 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.224 159160 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.224 159160 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159160#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.302 159160 DEBUG oslo.privsep.daemon [-] privsep: reply[6c02a2e4-c82b-4e22-8f1e-b054bc3d796f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.702 159160 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.702 159160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:46:02 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.703 159160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.141 159160 DEBUG oslo.privsep.daemon [-] privsep: reply[ad09efbc-dfbf-4b65-b3f1-b717b74bab22]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.144 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, column=external_ids, values=({'neutron:ovn-metadata-id': '4fda91b0-bfd4-5361-9fc0-dd5f70601ca4'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.151 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:46:03 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 26 07:46:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:06 np0005536586 systemd-logind[777]: New session 48 of user zuul.
Nov 26 07:46:06 np0005536586 systemd[1]: Started Session 48 of User zuul.
Nov 26 07:46:07 np0005536586 python3.9[159318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:46:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:08 np0005536586 python3.9[159474]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:09 np0005536586 python3.9[159635]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:46:09 np0005536586 systemd[1]: Reloading.
Nov 26 07:46:09 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:46:09 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:46:09 np0005536586 python3.9[159820]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:46:09 np0005536586 network[159837]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:46:09 np0005536586 network[159838]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:46:09 np0005536586 network[159839]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:46:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:10 np0005536586 podman[159861]: 2025-11-26 12:46:10.7523643 +0000 UTC m=+0.065009186 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:46:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:12 np0005536586 python3.9[160125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:12 np0005536586 python3.9[160278]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:13 np0005536586 python3.9[160431]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:13 np0005536586 python3.9[160584]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:14 np0005536586 python3.9[160737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:15 np0005536586 python3.9[160890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:15 np0005536586 python3.9[161043]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:46:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:16 np0005536586 python3.9[161196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:16 np0005536586 python3.9[161348]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:17 np0005536586 python3.9[161500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:17 np0005536586 python3.9[161652]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:18 np0005536586 python3.9[161804]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:18 np0005536586 python3.9[161956]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:18 np0005536586 python3.9[162108]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:19 np0005536586 python3.9[162260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:19 np0005536586 python3.9[162412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:20 np0005536586 python3.9[162564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:20 np0005536586 python3.9[162716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:21 np0005536586 python3.9[162868]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:21 np0005536586 python3.9[163020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:22 np0005536586 python3.9[163172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:46:22 np0005536586 python3.9[163324]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:23 np0005536586 python3.9[163476]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:46:23 np0005536586 python3.9[163628]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:46:23 np0005536586 systemd[1]: Reloading.
Nov 26 07:46:23 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:46:23 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:46:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:24 np0005536586 python3.9[163815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:24 np0005536586 python3.9[163968]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:25 np0005536586 python3.9[164121]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:25 np0005536586 python3.9[164274]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.994652) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185994680, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1461, "num_deletes": 250, "total_data_size": 2297655, "memory_usage": 2326888, "flush_reason": "Manual Compaction"}
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185998485, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1318929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7385, "largest_seqno": 8845, "table_properties": {"data_size": 1314037, "index_size": 2224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12650, "raw_average_key_size": 19, "raw_value_size": 1303089, "raw_average_value_size": 2039, "num_data_blocks": 106, "num_entries": 639, "num_filter_entries": 639, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161030, "oldest_key_time": 1764161030, "file_creation_time": 1764161185, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 3857 microseconds, and 2889 cpu microseconds.
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998512) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1318929 bytes OK
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998522) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998813) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998823) EVENT_LOG_v1 {"time_micros": 1764161185998820, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2291234, prev total WAL file size 2291234, number of live WAL files 2.
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.999333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1288KB)], [20(7417KB)]
Nov 26 07:46:25 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185999416, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8914114, "oldest_snapshot_seqno": -1}
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3322 keys, 6858498 bytes, temperature: kUnknown
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186013960, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6858498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6833231, "index_size": 15878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79678, "raw_average_key_size": 23, "raw_value_size": 6770109, "raw_average_value_size": 2037, "num_data_blocks": 705, "num_entries": 3322, "num_filter_entries": 3322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161185, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014090) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6858498 bytes
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 612.1 rd, 470.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(12.0) write-amplify(5.2) OK, records in: 3762, records dropped: 440 output_compression: NoCompression
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014443) EVENT_LOG_v1 {"time_micros": 1764161186014436, "job": 6, "event": "compaction_finished", "compaction_time_micros": 14564, "compaction_time_cpu_micros": 11592, "output_level": 6, "num_output_files": 1, "total_output_size": 6858498, "num_input_records": 3762, "num_output_records": 3322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186014686, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186015666, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.999233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:46:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:26 np0005536586 python3.9[164427]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:26 np0005536586 python3.9[164580]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:27 np0005536586 python3.9[164733]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:46:27 np0005536586 python3.9[164886]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 26 07:46:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:28 np0005536586 python3.9[165039]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:46:29 np0005536586 python3.9[165197]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 07:46:29 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:46:29 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:46:29 np0005536586 python3.9[165358]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:46:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:30 np0005536586 podman[165414]: 2025-11-26 12:46:30.349267235 +0000 UTC m=+0.040942727 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 07:46:30 np0005536586 python3.9[165459]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:46:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:46:35
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups']
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:46:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:46:35 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:40 np0005536586 podman[165644]: 2025-11-26 12:46:40.903625108 +0000 UTC m=+0.066935215 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:46:40 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:46:44 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 65762dfa-a4b1-4b93-ad1a-9b7c872f2b65 does not exist
Nov 26 07:46:44 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 82127588-d9e2-4329-b4ef-ed610a1a083d does not exist
Nov 26 07:46:44 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 67bdc3bb-6ca8-4120-a4c2-50bb68709fde does not exist
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:46:44 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.519721531 +0000 UTC m=+0.031419017 container create 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:46:44 np0005536586 systemd[1]: Started libpod-conmon-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope.
Nov 26 07:46:44 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.573937192 +0000 UTC m=+0.085634688 container init 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.57851947 +0000 UTC m=+0.090216965 container start 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.579607941 +0000 UTC m=+0.091305436 container attach 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:46:44 np0005536586 musing_volhard[165948]: 167 167
Nov 26 07:46:44 np0005536586 systemd[1]: libpod-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope: Deactivated successfully.
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.586449546 +0000 UTC m=+0.098147040 container died 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:46:44 np0005536586 systemd[1]: var-lib-containers-storage-overlay-387896585ec97bef6667ecb61c6c40e2ec793b841ecacbdbb3cfb86bab3252a1-merged.mount: Deactivated successfully.
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.507871177 +0000 UTC m=+0.019568692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:44 np0005536586 podman[165934]: 2025-11-26 12:46:44.60902301 +0000 UTC m=+0.120720504 container remove 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 07:46:44 np0005536586 systemd[1]: libpod-conmon-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope: Deactivated successfully.
Nov 26 07:46:44 np0005536586 podman[165969]: 2025-11-26 12:46:44.724804535 +0000 UTC m=+0.026806373 container create a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:46:44 np0005536586 systemd[1]: Started libpod-conmon-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope.
Nov 26 07:46:44 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:44 np0005536586 podman[165969]: 2025-11-26 12:46:44.779574922 +0000 UTC m=+0.081576769 container init a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:46:44 np0005536586 podman[165969]: 2025-11-26 12:46:44.785158676 +0000 UTC m=+0.087160513 container start a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:46:44 np0005536586 podman[165969]: 2025-11-26 12:46:44.786211581 +0000 UTC m=+0.088213418 container attach a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:46:44 np0005536586 podman[165969]: 2025-11-26 12:46:44.714394656 +0000 UTC m=+0.016396513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:46:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:46:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:45 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:46:45 np0005536586 lucid_brahmagupta[165982]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:46:45 np0005536586 lucid_brahmagupta[165982]: --> relative data size: 1.0
Nov 26 07:46:45 np0005536586 lucid_brahmagupta[165982]: --> All data devices are unavailable
Nov 26 07:46:45 np0005536586 systemd[1]: libpod-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope: Deactivated successfully.
Nov 26 07:46:45 np0005536586 podman[165969]: 2025-11-26 12:46:45.597111278 +0000 UTC m=+0.899113115 container died a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:46:45 np0005536586 systemd[1]: var-lib-containers-storage-overlay-50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16-merged.mount: Deactivated successfully.
Nov 26 07:46:45 np0005536586 podman[165969]: 2025-11-26 12:46:45.632871025 +0000 UTC m=+0.934872852 container remove a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:46:45 np0005536586 systemd[1]: libpod-conmon-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope: Deactivated successfully.
Nov 26 07:46:45 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.056581427 +0000 UTC m=+0.032128154 container create 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:46:46 np0005536586 systemd[1]: Started libpod-conmon-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope.
Nov 26 07:46:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.106187251 +0000 UTC m=+0.081733989 container init 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.110496385 +0000 UTC m=+0.086043113 container start 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.111854611 +0000 UTC m=+0.087401340 container attach 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:46:46 np0005536586 admiring_snyder[166166]: 167 167
Nov 26 07:46:46 np0005536586 systemd[1]: libpod-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope: Deactivated successfully.
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.114104648 +0000 UTC m=+0.089651377 container died 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:46:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-def5b94c6ba9067172ee607c299ab0b35a062ae61f8ada3aeef13de7c7c3d049-merged.mount: Deactivated successfully.
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.132955102 +0000 UTC m=+0.108501830 container remove 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:46:46 np0005536586 podman[166153]: 2025-11-26 12:46:46.040992423 +0000 UTC m=+0.016539172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:46 np0005536586 systemd[1]: libpod-conmon-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope: Deactivated successfully.
Nov 26 07:46:46 np0005536586 podman[166188]: 2025-11-26 12:46:46.247687715 +0000 UTC m=+0.025582521 container create 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:46:46 np0005536586 systemd[1]: Started libpod-conmon-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope.
Nov 26 07:46:46 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:46 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:46 np0005536586 podman[166188]: 2025-11-26 12:46:46.306834002 +0000 UTC m=+0.084728808 container init 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:46:46 np0005536586 podman[166188]: 2025-11-26 12:46:46.311890483 +0000 UTC m=+0.089785290 container start 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:46:46 np0005536586 podman[166188]: 2025-11-26 12:46:46.312950689 +0000 UTC m=+0.090845495 container attach 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 07:46:46 np0005536586 podman[166188]: 2025-11-26 12:46:46.237714696 +0000 UTC m=+0.015609522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]: {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    "0": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "devices": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "/dev/loop3"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            ],
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_name": "ceph_lv0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_size": "21470642176",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "name": "ceph_lv0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "tags": {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_name": "ceph",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.crush_device_class": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.encrypted": "0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_id": "0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.vdo": "0"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            },
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "vg_name": "ceph_vg0"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        }
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    ],
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    "1": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "devices": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "/dev/loop4"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            ],
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_name": "ceph_lv1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_size": "21470642176",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "name": "ceph_lv1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "tags": {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_name": "ceph",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.crush_device_class": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.encrypted": "0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_id": "1",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.vdo": "0"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            },
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "vg_name": "ceph_vg1"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        }
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    ],
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    "2": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "devices": [
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "/dev/loop5"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            ],
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_name": "ceph_lv2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_size": "21470642176",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "name": "ceph_lv2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "tags": {
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.cluster_name": "ceph",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.crush_device_class": "",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.encrypted": "0",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osd_id": "2",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:                "ceph.vdo": "0"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            },
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "type": "block",
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:            "vg_name": "ceph_vg2"
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:        }
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]:    ]
Nov 26 07:46:46 np0005536586 peaceful_wiles[166202]: }
Nov 26 07:46:46 np0005536586 systemd[1]: libpod-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope: Deactivated successfully.
Nov 26 07:46:46 np0005536586 podman[166211]: 2025-11-26 12:46:46.965918594 +0000 UTC m=+0.016859145 container died 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 07:46:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5-merged.mount: Deactivated successfully.
Nov 26 07:46:46 np0005536586 podman[166211]: 2025-11-26 12:46:46.994667857 +0000 UTC m=+0.045608399 container remove 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:46:46 np0005536586 systemd[1]: libpod-conmon-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope: Deactivated successfully.
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.423777686 +0000 UTC m=+0.032183500 container create a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:46:47 np0005536586 systemd[1]: Started libpod-conmon-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope.
Nov 26 07:46:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.477470815 +0000 UTC m=+0.085876629 container init a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.48181797 +0000 UTC m=+0.090223774 container start a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.483095245 +0000 UTC m=+0.091501069 container attach a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:46:47 np0005536586 thirsty_benz[166368]: 167 167
Nov 26 07:46:47 np0005536586 systemd[1]: libpod-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope: Deactivated successfully.
Nov 26 07:46:47 np0005536586 conmon[166368]: conmon a4e6ed90b27531bcf60c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope/container/memory.events
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.486991512 +0000 UTC m=+0.095397316 container died a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:46:47 np0005536586 systemd[1]: var-lib-containers-storage-overlay-7f069efd7cf4d45e1b49232e8f02512f46c4f6a8de7676088a506425e1839254-merged.mount: Deactivated successfully.
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.412948846 +0000 UTC m=+0.021354650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:47 np0005536586 podman[166354]: 2025-11-26 12:46:47.513153724 +0000 UTC m=+0.121559528 container remove a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:46:47 np0005536586 systemd[1]: libpod-conmon-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope: Deactivated successfully.
Nov 26 07:46:47 np0005536586 podman[166391]: 2025-11-26 12:46:47.631093724 +0000 UTC m=+0.026795994 container create f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:46:47 np0005536586 systemd[1]: Started libpod-conmon-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope.
Nov 26 07:46:47 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:46:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:47 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:46:47 np0005536586 podman[166391]: 2025-11-26 12:46:47.688389348 +0000 UTC m=+0.084091607 container init f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:46:47 np0005536586 podman[166391]: 2025-11-26 12:46:47.69405229 +0000 UTC m=+0.089754550 container start f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:46:47 np0005536586 podman[166391]: 2025-11-26 12:46:47.695107157 +0000 UTC m=+0.090809417 container attach f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:46:47 np0005536586 podman[166391]: 2025-11-26 12:46:47.62053317 +0000 UTC m=+0.016235451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:46:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:48 np0005536586 agitated_raman[166405]: {
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_id": 1,
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "type": "bluestore"
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    },
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_id": 2,
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "type": "bluestore"
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    },
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_id": 0,
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:        "type": "bluestore"
Nov 26 07:46:48 np0005536586 agitated_raman[166405]:    }
Nov 26 07:46:48 np0005536586 agitated_raman[166405]: }
Nov 26 07:46:48 np0005536586 systemd[1]: libpod-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope: Deactivated successfully.
Nov 26 07:46:48 np0005536586 conmon[166405]: conmon f232e78971337a306b01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope/container/memory.events
Nov 26 07:46:48 np0005536586 podman[166391]: 2025-11-26 12:46:48.459521897 +0000 UTC m=+0.855224158 container died f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 07:46:48 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b-merged.mount: Deactivated successfully.
Nov 26 07:46:48 np0005536586 podman[166391]: 2025-11-26 12:46:48.490237242 +0000 UTC m=+0.885939501 container remove f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:46:48 np0005536586 systemd[1]: libpod-conmon-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope: Deactivated successfully.
Nov 26 07:46:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:46:48 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:48 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:46:48 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:48 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9a5269a8-f8e8-41a4-87c0-b48cbae4830d does not exist
Nov 26 07:46:48 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d05bc1dc-cdfd-4109-8e13-17dca0c39cfe does not exist
Nov 26 07:46:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:46:49 np0005536586 kernel: SELinux:  Converting 2769 SID table entries...
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:46:49 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:46:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:46:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2014 writes, 8959 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2014 writes, 2014 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2014 writes, 8959 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 11.64 MB, 0.02 MB/s#012Interval WAL: 2014 writes, 2014 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    436.4      0.02              0.01         3    0.007       0      0       0.0       0.0#012  L6      1/0    6.54 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    526.7    459.9      0.03              0.02         2    0.015    7174    729       0.0       0.0#012 Sum      1/0    6.54 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    318.6    450.6      0.05              0.04         5    0.010    7174    729       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    325.5    459.3      0.05              0.04         4    0.012    7174    729       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    526.7    459.9      0.03              0.02         2    0.015    7174    729       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    458.4      0.02              0.01         2    0.009       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 308.00 MB usage: 566.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(35,478.89 KB,0.15184%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.91 KB,0.0186772%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 07:46:55 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:46:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:46:57 np0005536586 kernel: SELinux:  Converting 2769 SID table entries...
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:46:57 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:46:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:00 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 26 07:47:00 np0005536586 podman[166513]: 2025-11-26 12:47:00.876268566 +0000 UTC m=+0.042591145 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 07:47:00 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.720 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 07:47:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.721 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 07:47:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.721 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 07:47:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:10 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:11 np0005536586 podman[171613]: 2025-11-26 12:47:11.884489345 +0000 UTC m=+0.056480670 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 07:47:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:31 np0005536586 podman[183350]: 2025-11-26 12:47:31.873675376 +0000 UTC m=+0.042870723 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:35 np0005536586 kernel: SELinux:  Converting 2770 SID table entries...
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 07:47:35 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:47:35
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta']
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:47:35 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:47:35 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:47:35 np0005536586 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:47:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:47:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:41 np0005536586 systemd[1]: Stopping OpenSSH server daemon...
Nov 26 07:47:41 np0005536586 systemd[1]: sshd.service: Deactivated successfully.
Nov 26 07:47:41 np0005536586 systemd[1]: Stopped OpenSSH server daemon.
Nov 26 07:47:41 np0005536586 systemd[1]: sshd.service: Consumed 1.402s CPU time, read 32.0K from disk, written 0B to disk.
Nov 26 07:47:41 np0005536586 systemd[1]: Stopped target sshd-keygen.target.
Nov 26 07:47:41 np0005536586 systemd[1]: Stopping sshd-keygen.target...
Nov 26 07:47:41 np0005536586 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:47:41 np0005536586 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:47:41 np0005536586 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 07:47:41 np0005536586 systemd[1]: Reached target sshd-keygen.target.
Nov 26 07:47:41 np0005536586 systemd[1]: Starting OpenSSH server daemon...
Nov 26 07:47:41 np0005536586 systemd[1]: Started OpenSSH server daemon.
Nov 26 07:47:42 np0005536586 podman[184268]: 2025-11-26 12:47:42.011846491 +0000 UTC m=+0.090976261 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:47:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.305145) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262305227, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 832, "num_deletes": 251, "total_data_size": 1147783, "memory_usage": 1165120, "flush_reason": "Manual Compaction"}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262311374, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1137594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8846, "largest_seqno": 9677, "table_properties": {"data_size": 1133396, "index_size": 1914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8803, "raw_average_key_size": 18, "raw_value_size": 1125029, "raw_average_value_size": 2378, "num_data_blocks": 89, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161186, "oldest_key_time": 1764161186, "file_creation_time": 1764161262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6269 microseconds, and 5015 cpu microseconds.
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.311423) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1137594 bytes OK
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.311445) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312009) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312021) EVENT_LOG_v1 {"time_micros": 1764161262312018, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1143676, prev total WAL file size 1143676, number of live WAL files 2.
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1110KB)], [23(6697KB)]
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262312735, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7996092, "oldest_snapshot_seqno": -1}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3281 keys, 6224719 bytes, temperature: kUnknown
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262329342, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6224719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6200728, "index_size": 14666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79556, "raw_average_key_size": 24, "raw_value_size": 6139326, "raw_average_value_size": 1871, "num_data_blocks": 641, "num_entries": 3281, "num_filter_entries": 3281, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.329639) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6224719 bytes
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.330364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 481.5 rd, 374.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(12.5) write-amplify(5.5) OK, records in: 3795, records dropped: 514 output_compression: NoCompression
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.330386) EVENT_LOG_v1 {"time_micros": 1764161262330374, "job": 8, "event": "compaction_finished", "compaction_time_micros": 16606, "compaction_time_cpu_micros": 13516, "output_level": 6, "num_output_files": 1, "total_output_size": 6224719, "num_input_records": 3795, "num_output_records": 3281, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262330987, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262332414, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:42 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:47:43 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:47:43 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:47:43 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:43 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:43 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:43 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:47:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:47:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:47:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:46 np0005536586 python3.9[187766]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:47:47 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:47 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:47 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:48 np0005536586 python3.9[189959]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:47:48 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:48 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:48 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:49 np0005536586 python3.9[191259]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:47:49 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:49 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:49 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:49 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:50 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 98f53cb5-6a97-4e90-bdea-079c62a716f3 does not exist
Nov 26 07:47:50 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev ba583fbc-4e18-4522-b490-2b40a4b3b3c9 does not exist
Nov 26 07:47:50 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e012392a-f40b-4485-8c89-8151280a9083 does not exist
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:47:50 np0005536586 python3.9[192706]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:47:50 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:50 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:47:50 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:50 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.88688434 +0000 UTC m=+0.033092552 container create 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:47:50 np0005536586 systemd[1]: Started libpod-conmon-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope.
Nov 26 07:47:50 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.967983944 +0000 UTC m=+0.114192156 container init 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.87200299 +0000 UTC m=+0.018211223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.975625563 +0000 UTC m=+0.121833764 container start 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.977895196 +0000 UTC m=+0.124103409 container attach 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:47:50 np0005536586 competent_goldberg[194043]: 167 167
Nov 26 07:47:50 np0005536586 systemd[1]: libpod-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope: Deactivated successfully.
Nov 26 07:47:50 np0005536586 conmon[194043]: conmon 0e52e371ebbaa977f695 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope/container/memory.events
Nov 26 07:47:50 np0005536586 podman[193895]: 2025-11-26 12:47:50.985962195 +0000 UTC m=+0.132170408 container died 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:47:51 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3f84b50d3563a09edc3b9ffe849ccd27a0f72163b1e6efc30dc28ca36ea3821b-merged.mount: Deactivated successfully.
Nov 26 07:47:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:51 np0005536586 podman[193895]: 2025-11-26 12:47:51.021179371 +0000 UTC m=+0.167387583 container remove 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:51 np0005536586 systemd[1]: libpod-conmon-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope: Deactivated successfully.
Nov 26 07:47:51 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:47:51 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:47:51 np0005536586 systemd[1]: man-db-cache-update.service: Consumed 9.454s CPU time.
Nov 26 07:47:51 np0005536586 systemd[1]: run-r9038aa58a0774d5692cf4c3984359af4.service: Deactivated successfully.
Nov 26 07:47:51 np0005536586 podman[194214]: 2025-11-26 12:47:51.175445026 +0000 UTC m=+0.035005993 container create b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:47:51 np0005536586 systemd[1]: Started libpod-conmon-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope.
Nov 26 07:47:51 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:51 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:51 np0005536586 podman[194214]: 2025-11-26 12:47:51.257319044 +0000 UTC m=+0.116880020 container init b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:47:51 np0005536586 podman[194214]: 2025-11-26 12:47:51.162168765 +0000 UTC m=+0.021729752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:51 np0005536586 podman[194214]: 2025-11-26 12:47:51.266603709 +0000 UTC m=+0.126164676 container start b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:47:51 np0005536586 podman[194214]: 2025-11-26 12:47:51.268360541 +0000 UTC m=+0.127921528 container attach b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:47:51 np0005536586 python3.9[194186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:51 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:51 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:51 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:52 np0005536586 quizzical_ride[194227]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:47:52 np0005536586 quizzical_ride[194227]: --> relative data size: 1.0
Nov 26 07:47:52 np0005536586 quizzical_ride[194227]: --> All data devices are unavailable
Nov 26 07:47:52 np0005536586 systemd[1]: libpod-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope: Deactivated successfully.
Nov 26 07:47:52 np0005536586 podman[194446]: 2025-11-26 12:47:52.229641545 +0000 UTC m=+0.030133426 container died b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:52 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16-merged.mount: Deactivated successfully.
Nov 26 07:47:52 np0005536586 podman[194446]: 2025-11-26 12:47:52.265478153 +0000 UTC m=+0.065970034 container remove b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:52 np0005536586 systemd[1]: libpod-conmon-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope: Deactivated successfully.
Nov 26 07:47:52 np0005536586 python3.9[194435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:52 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:52 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:52 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.015586165 +0000 UTC m=+0.034481244 container create 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:47:53 np0005536586 systemd[1]: Started libpod-conmon-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope.
Nov 26 07:47:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.093569607 +0000 UTC m=+0.112464705 container init 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:52.999349161 +0000 UTC m=+0.018244260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.100205849 +0000 UTC m=+0.119100917 container start 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.10230654 +0000 UTC m=+0.121201639 container attach 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:47:53 np0005536586 eager_ride[194795]: 167 167
Nov 26 07:47:53 np0005536586 systemd[1]: libpod-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope: Deactivated successfully.
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.105491405 +0000 UTC m=+0.124386483 container died 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:47:53 np0005536586 systemd[1]: var-lib-containers-storage-overlay-88e913ede45ccb30d0cae1267fd8d199c6ff7a0fcd5b019723e71dd27b11c19c-merged.mount: Deactivated successfully.
Nov 26 07:47:53 np0005536586 podman[194781]: 2025-11-26 12:47:53.130280753 +0000 UTC m=+0.149175832 container remove 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 07:47:53 np0005536586 systemd[1]: libpod-conmon-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope: Deactivated successfully.
Nov 26 07:47:53 np0005536586 python3.9[194779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:53 np0005536586 podman[194817]: 2025-11-26 12:47:53.269490342 +0000 UTC m=+0.035146948 container create ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:47:53 np0005536586 systemd[1]: Started libpod-conmon-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope.
Nov 26 07:47:53 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:53 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:53 np0005536586 podman[194817]: 2025-11-26 12:47:53.345105729 +0000 UTC m=+0.110762335 container init ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:47:53 np0005536586 podman[194817]: 2025-11-26 12:47:53.353260324 +0000 UTC m=+0.118916940 container start ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:47:53 np0005536586 podman[194817]: 2025-11-26 12:47:53.354472219 +0000 UTC m=+0.120128835 container attach ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:53 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:53 np0005536586 podman[194817]: 2025-11-26 12:47:53.254766621 +0000 UTC m=+0.020423247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:53 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:53 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]: {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    "0": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "devices": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "/dev/loop3"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            ],
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_name": "ceph_lv0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_size": "21470642176",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "name": "ceph_lv0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "tags": {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_name": "ceph",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.crush_device_class": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.encrypted": "0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_id": "0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.vdo": "0"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            },
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "vg_name": "ceph_vg0"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        }
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    ],
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    "1": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "devices": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "/dev/loop4"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            ],
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_name": "ceph_lv1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_size": "21470642176",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "name": "ceph_lv1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "tags": {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_name": "ceph",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.crush_device_class": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.encrypted": "0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_id": "1",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.vdo": "0"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            },
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "vg_name": "ceph_vg1"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        }
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    ],
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    "2": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "devices": [
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "/dev/loop5"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            ],
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_name": "ceph_lv2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_size": "21470642176",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "name": "ceph_lv2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "tags": {
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.cluster_name": "ceph",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.crush_device_class": "",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.encrypted": "0",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osd_id": "2",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:                "ceph.vdo": "0"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            },
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "type": "block",
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:            "vg_name": "ceph_vg2"
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:        }
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]:    ]
Nov 26 07:47:54 np0005536586 wonderful_darwin[194832]: }
Nov 26 07:47:54 np0005536586 systemd[1]: libpod-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope: Deactivated successfully.
Nov 26 07:47:54 np0005536586 podman[194817]: 2025-11-26 12:47:54.059097897 +0000 UTC m=+0.824754504 container died ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57-merged.mount: Deactivated successfully.
Nov 26 07:47:54 np0005536586 podman[194817]: 2025-11-26 12:47:54.103419253 +0000 UTC m=+0.869075859 container remove ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 07:47:54 np0005536586 systemd[1]: libpod-conmon-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope: Deactivated successfully.
Nov 26 07:47:54 np0005536586 python3.9[195028]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.674547923 +0000 UTC m=+0.045243464 container create 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:54 np0005536586 systemd[1]: Started libpod-conmon-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope.
Nov 26 07:47:54 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.73273528 +0000 UTC m=+0.103430841 container init 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.741848583 +0000 UTC m=+0.112544134 container start 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.743627868 +0000 UTC m=+0.114323408 container attach 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:54 np0005536586 infallible_tharp[195337]: 167 167
Nov 26 07:47:54 np0005536586 systemd[1]: libpod-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope: Deactivated successfully.
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.746364547 +0000 UTC m=+0.117060088 container died 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.657082904 +0000 UTC m=+0.027778445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:54 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1e44fc9266bc7ec6163792b238907d1ce38793ae1e2b1161bb7f7f63fe324158-merged.mount: Deactivated successfully.
Nov 26 07:47:54 np0005536586 podman[195294]: 2025-11-26 12:47:54.768703596 +0000 UTC m=+0.139399137 container remove 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:54 np0005536586 systemd[1]: libpod-conmon-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope: Deactivated successfully.
Nov 26 07:47:54 np0005536586 podman[195359]: 2025-11-26 12:47:54.919661727 +0000 UTC m=+0.037855244 container create 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 07:47:54 np0005536586 python3.9[195336]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:54 np0005536586 systemd[1]: Started libpod-conmon-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope.
Nov 26 07:47:55 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:47:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:55 np0005536586 podman[195359]: 2025-11-26 12:47:54.904922416 +0000 UTC m=+0.023115954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:47:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:55 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:47:55 np0005536586 podman[195359]: 2025-11-26 12:47:55.018353652 +0000 UTC m=+0.136547170 container init 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:47:55 np0005536586 podman[195359]: 2025-11-26 12:47:55.025335607 +0000 UTC m=+0.143529124 container start 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:47:55 np0005536586 podman[195359]: 2025-11-26 12:47:55.029804302 +0000 UTC m=+0.147997820 container attach 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:47:55 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:55 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:55 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]: {
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_id": 1,
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "type": "bluestore"
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    },
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_id": 2,
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "type": "bluestore"
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    },
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_id": 0,
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:        "type": "bluestore"
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]:    }
Nov 26 07:47:55 np0005536586 intelligent_maxwell[195373]: }
Nov 26 07:47:55 np0005536586 systemd[1]: libpod-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope: Deactivated successfully.
Nov 26 07:47:55 np0005536586 python3.9[195571]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 07:47:55 np0005536586 podman[195595]: 2025-11-26 12:47:55.945516829 +0000 UTC m=+0.026849414 container died 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:47:55 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16-merged.mount: Deactivated successfully.
Nov 26 07:47:55 np0005536586 podman[195595]: 2025-11-26 12:47:55.989031683 +0000 UTC m=+0.070364248 container remove 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:47:55 np0005536586 systemd[1]: Reloading.
Nov 26 07:47:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:47:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:47:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:47:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev cf3880a9-3aff-4caa-a930-29ff21a1e619 does not exist
Nov 26 07:47:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev b5694425-2467-49cd-becd-635f578ef9bd does not exist
Nov 26 07:47:56 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:47:56 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:47:56 np0005536586 systemd[1]: libpod-conmon-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope: Deactivated successfully.
Nov 26 07:47:56 np0005536586 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 26 07:47:56 np0005536586 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 26 07:47:56 np0005536586 python3.9[195848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:47:57 np0005536586 python3.9[196003]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:47:59 np0005536586 python3.9[196158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:47:59 np0005536586 python3.9[196313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:00 np0005536586 python3.9[196468]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:01 np0005536586 python3.9[196623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:01 np0005536586 python3.9[196778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.722 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:48:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:48:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:48:02 np0005536586 podman[196905]: 2025-11-26 12:48:02.054974894 +0000 UTC m=+0.048004119 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 07:48:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:02 np0005536586 python3.9[196948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:02 np0005536586 python3.9[197105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:03 np0005536586 python3.9[197260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:04 np0005536586 python3.9[197415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:04 np0005536586 python3.9[197570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:05 np0005536586 python3.9[197725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:06 np0005536586 python3.9[197880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 07:48:06 np0005536586 python3.9[198035]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:07 np0005536586 python3.9[198187]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:07 np0005536586 python3.9[198339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:08 np0005536586 python3.9[198491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:08 np0005536586 python3.9[198643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:08 np0005536586 python3.9[198795]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:48:09 np0005536586 python3.9[198947]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:10 np0005536586 python3.9[199072]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161289.1314797-554-42520484126253/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:10 np0005536586 auditd[670]: Audit daemon rotating log files
Nov 26 07:48:10 np0005536586 python3.9[199224]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:10 np0005536586 python3.9[199349]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161290.2104938-554-219758987890700/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:11 np0005536586 python3.9[199501]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:11 np0005536586 python3.9[199626]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161291.0333848-554-268677477425835/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:12 np0005536586 podman[199778]: 2025-11-26 12:48:12.137252043 +0000 UTC m=+0.068872164 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 07:48:12 np0005536586 python3.9[199779]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:12 np0005536586 python3.9[199926]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161291.8748186-554-270317394101025/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:13 np0005536586 python3.9[200078]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:13 np0005536586 python3.9[200203]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161292.726138-554-1643060450308/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:13 np0005536586 python3.9[200355]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:14 np0005536586 python3.9[200480]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161293.582661-554-146285844496140/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:14 np0005536586 python3.9[200632]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:15 np0005536586 python3.9[200755]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161294.4155183-554-216187874434389/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:15 np0005536586 python3.9[200907]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:15 np0005536586 python3.9[201032]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161295.231182-554-74235149052785/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:16 np0005536586 python3.9[201184]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 26 07:48:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:48:16 np0005536586 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s#012Interval WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Nov 26 07:48:16 np0005536586 python3.9[201337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:17 np0005536586 python3.9[201489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:17 np0005536586 python3.9[201641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:18 np0005536586 python3.9[201793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:18 np0005536586 python3.9[201945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:19 np0005536586 python3.9[202097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:19 np0005536586 python3.9[202249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:20 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:48:20 np0005536586 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 19.36 MB, 0.03 MB/s
Interval WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 26 07:48:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:20 np0005536586 python3.9[202401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:20 np0005536586 python3.9[202553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:21 np0005536586 python3.9[202705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:21 np0005536586 python3.9[202857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:22 np0005536586 python3.9[203009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:22 np0005536586 python3.9[203161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:22 np0005536586 python3.9[203313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:23 np0005536586 python3.9[203465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:48:23 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 18.26 MB, 0.03 MB/s
Interval WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 26 07:48:23 np0005536586 python3.9[203588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161303.1603756-775-154261401173795/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:24 np0005536586 python3.9[203740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:24 np0005536586 ceph-mgr[75236]: [devicehealth INFO root] Check health
Nov 26 07:48:24 np0005536586 python3.9[203863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161304.0797348-775-189164635029203/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:25 np0005536586 python3.9[204015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:25 np0005536586 python3.9[204138]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161304.988628-775-215286767773363/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:26 np0005536586 python3.9[204290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:26 np0005536586 python3.9[204413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161305.9421377-775-17702914241174/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:27 np0005536586 python3.9[204565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:27 np0005536586 python3.9[204688]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161306.8401961-775-164531440223944/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:28 np0005536586 python3.9[204840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:28 np0005536586 python3.9[204963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161307.7694519-775-10808284215559/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:29 np0005536586 python3.9[205115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:29 np0005536586 python3.9[205238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161308.7312834-775-176864826569364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:30 np0005536586 python3.9[205390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:30 np0005536586 python3.9[205513]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161309.6708312-775-228413769703142/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:30 np0005536586 python3.9[205665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:31 np0005536586 python3.9[205788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161310.6079724-775-86166847146376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:31 np0005536586 python3.9[205940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:32 np0005536586 podman[206035]: 2025-11-26 12:48:32.166363632 +0000 UTC m=+0.056710996 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 07:48:32 np0005536586 python3.9[206078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161311.501768-775-187592748376917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:32 np0005536586 python3.9[206232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:33 np0005536586 python3.9[206355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161312.4587128-775-205140732168498/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:33 np0005536586 python3.9[206507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:34 np0005536586 python3.9[206630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161313.4011762-775-111106744894191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:34 np0005536586 python3.9[206782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:35 np0005536586 python3.9[206905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161314.3266828-775-43726961421276/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:35 np0005536586 python3.9[207057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:48:35
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms']
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:48:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:48:35 np0005536586 python3.9[207180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161315.1902616-775-121160419239618/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:36 np0005536586 python3.9[207330]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:48:37 np0005536586 python3.9[207485]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 26 07:48:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:38 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 26 07:48:38 np0005536586 python3.9[207641]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:38 np0005536586 python3.9[207793]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:39 np0005536586 python3.9[207945]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:39 np0005536586 python3.9[208097]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:40 np0005536586 python3.9[208249]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:40 np0005536586 python3.9[208401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:40 np0005536586 python3.9[208553]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:41 np0005536586 python3.9[208705]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:41 np0005536586 python3.9[208857]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:42 np0005536586 python3.9[209009]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:42 np0005536586 podman[209010]: 2025-11-26 12:48:42.374626667 +0000 UTC m=+0.064309701 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 07:48:42 np0005536586 python3.9[209185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:48:42 np0005536586 systemd[1]: Reloading.
Nov 26 07:48:42 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:48:42 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:48:43 np0005536586 systemd[1]: Starting libvirt logging daemon socket...
Nov 26 07:48:43 np0005536586 systemd[1]: Listening on libvirt logging daemon socket.
Nov 26 07:48:43 np0005536586 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 26 07:48:43 np0005536586 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 26 07:48:43 np0005536586 systemd[1]: Starting libvirt logging daemon...
Nov 26 07:48:43 np0005536586 systemd[1]: Started libvirt logging daemon.
Nov 26 07:48:43 np0005536586 python3.9[209378]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:48:43 np0005536586 systemd[1]: Reloading.
Nov 26 07:48:43 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:48:43 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:48:44 np0005536586 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 26 07:48:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:44 np0005536586 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 26 07:48:44 np0005536586 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 26 07:48:44 np0005536586 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 26 07:48:44 np0005536586 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 26 07:48:44 np0005536586 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 26 07:48:44 np0005536586 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 07:48:44 np0005536586 systemd[1]: Started libvirt nodedev daemon.
Nov 26 07:48:44 np0005536586 python3.9[209594]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:48:44 np0005536586 systemd[1]: Reloading.
Nov 26 07:48:44 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:48:44 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:48:44 np0005536586 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt proxy daemon...
Nov 26 07:48:45 np0005536586 systemd[1]: Started libvirt proxy daemon.
Nov 26 07:48:45 np0005536586 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:48:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:48:45 np0005536586 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 26 07:48:45 np0005536586 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 26 07:48:45 np0005536586 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 26 07:48:45 np0005536586 python3.9[209806]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:48:45 np0005536586 systemd[1]: Reloading.
Nov 26 07:48:45 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:48:45 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt locking daemon socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 26 07:48:45 np0005536586 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 26 07:48:45 np0005536586 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 26 07:48:45 np0005536586 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 26 07:48:45 np0005536586 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 07:48:45 np0005536586 systemd[1]: Started libvirt QEMU daemon.
Nov 26 07:48:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:46 np0005536586 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e216a4c2-173d-4083-97df-b5be5c9efb29
Nov 26 07:48:46 np0005536586 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 26 07:48:46 np0005536586 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e216a4c2-173d-4083-97df-b5be5c9efb29
Nov 26 07:48:46 np0005536586 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 26 07:48:46 np0005536586 python3.9[210031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:48:46 np0005536586 systemd[1]: Reloading.
Nov 26 07:48:46 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:48:46 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:48:46 np0005536586 systemd[1]: Starting libvirt secret daemon socket...
Nov 26 07:48:46 np0005536586 systemd[1]: Listening on libvirt secret daemon socket.
Nov 26 07:48:46 np0005536586 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 26 07:48:46 np0005536586 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 26 07:48:46 np0005536586 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 26 07:48:46 np0005536586 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 26 07:48:46 np0005536586 systemd[1]: Starting libvirt secret daemon...
Nov 26 07:48:46 np0005536586 systemd[1]: Started libvirt secret daemon.
Nov 26 07:48:47 np0005536586 python3.9[210243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:48 np0005536586 python3.9[210395]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:48:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:48 np0005536586 python3.9[210547]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:48:49 np0005536586 python3.9[210701]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:48:49 np0005536586 python3.9[210851]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:50 np0005536586 python3.9[210972]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161329.3506982-1133-267316216175564/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0c169d2ad7f41d18088a4831ca21879b2a114042 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:50 np0005536586 python3.9[211124]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f7d7fe93-41e5-51c4-b72d-63b38686102e#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:48:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:51 np0005536586 python3.9[211286]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:52 np0005536586 python3.9[211749]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:53 np0005536586 python3.9[211901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:53 np0005536586 python3.9[212024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161332.7046738-1188-214669036490072/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:54 np0005536586 python3.9[212176]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:54 np0005536586 python3.9[212328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:54 np0005536586 python3.9[212406]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:55 np0005536586 python3.9[212558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:55 np0005536586 python3.9[212636]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4diplglj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:56 np0005536586 python3.9[212788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:48:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:56 np0005536586 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 26 07:48:56 np0005536586 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 26 07:48:56 np0005536586 python3.9[212866]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:56 np0005536586 python3.9[213130]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:48:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d1aeefb3-ebf1-4310-9845-f5ea452a17d6 does not exist
Nov 26 07:48:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 793b7e18-ccf4-47d3-bcb6-dcacc4842087 does not exist
Nov 26 07:48:56 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev ffe4d145-daa7-4806-b1a8-3e1f66baa0a0 does not exist
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:48:56 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:48:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 07:48:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:48:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:48:57 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.353270379 +0000 UTC m=+0.040120536 container create e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 07:48:57 np0005536586 systemd[1]: Started libpod-conmon-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope.
Nov 26 07:48:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.416337031 +0000 UTC m=+0.103187197 container init e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.422582915 +0000 UTC m=+0.109433071 container start e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.423724004 +0000 UTC m=+0.110574160 container attach e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 07:48:57 np0005536586 suspicious_beaver[213446]: 167 167
Nov 26 07:48:57 np0005536586 systemd[1]: libpod-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope: Deactivated successfully.
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.427063864 +0000 UTC m=+0.113914019 container died e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.339689671 +0000 UTC m=+0.026539846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:48:57 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5355abdd380a8973eb7ad5306b20b108757f3b6597eb97a22b16a317899e75c2-merged.mount: Deactivated successfully.
Nov 26 07:48:57 np0005536586 podman[213430]: 2025-11-26 12:48:57.449659742 +0000 UTC m=+0.136509898 container remove e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:48:57 np0005536586 systemd[1]: libpod-conmon-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope: Deactivated successfully.
Nov 26 07:48:57 np0005536586 python3[213440]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 07:48:57 np0005536586 podman[213467]: 2025-11-26 12:48:57.584797555 +0000 UTC m=+0.034276499 container create 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:48:57 np0005536586 systemd[1]: Started libpod-conmon-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope.
Nov 26 07:48:57 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:48:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:57 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:57 np0005536586 podman[213467]: 2025-11-26 12:48:57.650570363 +0000 UTC m=+0.100049298 container init 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:48:57 np0005536586 podman[213467]: 2025-11-26 12:48:57.658372578 +0000 UTC m=+0.107851513 container start 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:48:57 np0005536586 podman[213467]: 2025-11-26 12:48:57.659650395 +0000 UTC m=+0.109129329 container attach 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:48:57 np0005536586 podman[213467]: 2025-11-26 12:48:57.570866788 +0000 UTC m=+0.020345742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:48:58 np0005536586 python3.9[213637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:48:58 np0005536586 python3.9[213715]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:58 np0005536586 stoic_zhukovsky[213505]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:48:58 np0005536586 stoic_zhukovsky[213505]: --> relative data size: 1.0
Nov 26 07:48:58 np0005536586 stoic_zhukovsky[213505]: --> All data devices are unavailable
Nov 26 07:48:58 np0005536586 systemd[1]: libpod-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope: Deactivated successfully.
Nov 26 07:48:58 np0005536586 conmon[213505]: conmon 78aa78727c6bddc5be61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope/container/memory.events
Nov 26 07:48:58 np0005536586 podman[213467]: 2025-11-26 12:48:58.517726614 +0000 UTC m=+0.967205547 container died 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:48:58 np0005536586 systemd[1]: var-lib-containers-storage-overlay-4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25-merged.mount: Deactivated successfully.
Nov 26 07:48:58 np0005536586 podman[213467]: 2025-11-26 12:48:58.551700952 +0000 UTC m=+1.001179886 container remove 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:48:58 np0005536586 systemd[1]: libpod-conmon-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope: Deactivated successfully.
Nov 26 07:48:58 np0005536586 python3.9[213972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.033850969 +0000 UTC m=+0.033565770 container create 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 07:48:59 np0005536586 systemd[1]: Started libpod-conmon-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope.
Nov 26 07:48:59 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.087697565 +0000 UTC m=+0.087412386 container init 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.092519667 +0000 UTC m=+0.092234469 container start 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.093809215 +0000 UTC m=+0.093524017 container attach 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:48:59 np0005536586 nervous_feistel[214124]: 167 167
Nov 26 07:48:59 np0005536586 systemd[1]: libpod-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope: Deactivated successfully.
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.096530181 +0000 UTC m=+0.096244982 container died 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:48:59 np0005536586 systemd[1]: var-lib-containers-storage-overlay-51f9e8a2786a9eb5bccdc8b2d69d52b0f58f2933cf3d89e088de9d5ca19ef34e-merged.mount: Deactivated successfully.
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.117695304 +0000 UTC m=+0.117410106 container remove 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:48:59 np0005536586 podman[214081]: 2025-11-26 12:48:59.021148675 +0000 UTC m=+0.020863496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:48:59 np0005536586 systemd[1]: libpod-conmon-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope: Deactivated successfully.
Nov 26 07:48:59 np0005536586 python3.9[214126]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.242146058 +0000 UTC m=+0.030759124 container create 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:48:59 np0005536586 systemd[1]: Started libpod-conmon-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope.
Nov 26 07:48:59 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:48:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:59 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.311261433 +0000 UTC m=+0.099874498 container init 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.316342583 +0000 UTC m=+0.104955648 container start 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.317995004 +0000 UTC m=+0.106608089 container attach 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.229606291 +0000 UTC m=+0.018219376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:48:59 np0005536586 python3.9[214316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]: {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    "0": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "devices": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "/dev/loop3"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            ],
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_name": "ceph_lv0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_size": "21470642176",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "name": "ceph_lv0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "tags": {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_name": "ceph",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.crush_device_class": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.encrypted": "0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_id": "0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.vdo": "0"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            },
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "vg_name": "ceph_vg0"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        }
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    ],
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    "1": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "devices": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "/dev/loop4"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            ],
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_name": "ceph_lv1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_size": "21470642176",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "name": "ceph_lv1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "tags": {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_name": "ceph",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.crush_device_class": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.encrypted": "0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_id": "1",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.vdo": "0"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            },
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "vg_name": "ceph_vg1"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        }
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    ],
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    "2": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "devices": [
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "/dev/loop5"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            ],
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_name": "ceph_lv2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_size": "21470642176",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "name": "ceph_lv2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "tags": {
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.cluster_name": "ceph",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.crush_device_class": "",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.encrypted": "0",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osd_id": "2",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:                "ceph.vdo": "0"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            },
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "type": "block",
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:            "vg_name": "ceph_vg2"
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:        }
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]:    ]
Nov 26 07:48:59 np0005536586 amazing_driscoll[214172]: }
Nov 26 07:48:59 np0005536586 systemd[1]: libpod-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope: Deactivated successfully.
Nov 26 07:48:59 np0005536586 podman[214147]: 2025-11-26 12:48:59.972524276 +0000 UTC m=+0.761137342 container died 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:48:59 np0005536586 systemd[1]: var-lib-containers-storage-overlay-eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55-merged.mount: Deactivated successfully.
Nov 26 07:49:00 np0005536586 podman[214147]: 2025-11-26 12:49:00.009597952 +0000 UTC m=+0.798211018 container remove 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:49:00 np0005536586 systemd[1]: libpod-conmon-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope: Deactivated successfully.
Nov 26 07:49:00 np0005536586 python3.9[214396]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.481405479 +0000 UTC m=+0.030697858 container create fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:49:00 np0005536586 systemd[1]: Started libpod-conmon-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope.
Nov 26 07:49:00 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.540447892 +0000 UTC m=+0.089740280 container init fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.54514582 +0000 UTC m=+0.094438198 container start fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.548741642 +0000 UTC m=+0.098034041 container attach fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:49:00 np0005536586 dreamy_carver[214707]: 167 167
Nov 26 07:49:00 np0005536586 systemd[1]: libpod-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope: Deactivated successfully.
Nov 26 07:49:00 np0005536586 conmon[214707]: conmon fe9ccb92ad054c02a1c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope/container/memory.events
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.550536032 +0000 UTC m=+0.099828410 container died fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:49:00 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b9fb4df5438f7582e0420d0042ad3eb4108c4b35e62ed45f88e4d72ad72de5f5-merged.mount: Deactivated successfully.
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.469903567 +0000 UTC m=+0.019195964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:49:00 np0005536586 podman[214693]: 2025-11-26 12:49:00.574177389 +0000 UTC m=+0.123469768 container remove fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:49:00 np0005536586 systemd[1]: libpod-conmon-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope: Deactivated successfully.
Nov 26 07:49:00 np0005536586 python3.9[214688]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:00 np0005536586 podman[214732]: 2025-11-26 12:49:00.701899415 +0000 UTC m=+0.032875621 container create e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:49:00 np0005536586 systemd[1]: Started libpod-conmon-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope.
Nov 26 07:49:00 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:49:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:49:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:49:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:49:00 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:49:00 np0005536586 podman[214732]: 2025-11-26 12:49:00.762002495 +0000 UTC m=+0.092978710 container init e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:49:00 np0005536586 podman[214732]: 2025-11-26 12:49:00.768868106 +0000 UTC m=+0.099844312 container start e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:49:00 np0005536586 podman[214732]: 2025-11-26 12:49:00.770524155 +0000 UTC m=+0.101500361 container attach e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:49:00 np0005536586 podman[214732]: 2025-11-26 12:49:00.688615817 +0000 UTC m=+0.019592042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:49:00 np0005536586 python3.9[214824]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:01 np0005536586 python3.9[214978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]: {
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_id": 1,
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "type": "bluestore"
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    },
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_id": 2,
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "type": "bluestore"
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    },
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_id": 0,
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:        "type": "bluestore"
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]:    }
Nov 26 07:49:01 np0005536586 pensive_margulis[214781]: }
Nov 26 07:49:01 np0005536586 systemd[1]: libpod-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope: Deactivated successfully.
Nov 26 07:49:01 np0005536586 podman[214732]: 2025-11-26 12:49:01.573909918 +0000 UTC m=+0.904886144 container died e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:49:01 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532-merged.mount: Deactivated successfully.
Nov 26 07:49:01 np0005536586 podman[214732]: 2025-11-26 12:49:01.606518294 +0000 UTC m=+0.937494499 container remove e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:49:01 np0005536586 systemd[1]: libpod-conmon-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope: Deactivated successfully.
Nov 26 07:49:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:49:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:49:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:49:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:49:01 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 243bd9ad-061c-4ece-87e5-2db84f01aca2 does not exist
Nov 26 07:49:01 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 36444402-dd88-4af0-ab34-5d73ba9349d4 does not exist
Nov 26 07:49:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.722 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:49:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:49:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.724 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:49:01 np0005536586 python3.9[215190]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161341.103489-1313-150961457938639/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:02 np0005536586 podman[215342]: 2025-11-26 12:49:02.261419796 +0000 UTC m=+0.039753985 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 07:49:02 np0005536586 python3.9[215343]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:49:02 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:49:02 np0005536586 python3.9[215509]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:49:03 np0005536586 python3.9[215664]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:03 np0005536586 python3.9[215816]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:49:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:04 np0005536586 python3.9[215969]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:04 np0005536586 python3.9[216123]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:49:05 np0005536586 python3.9[216278]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:05 np0005536586 python3.9[216430]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:06 np0005536586 python3.9[216553]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161345.502659-1385-108800985136086/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:06 np0005536586 python3.9[216705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:07 np0005536586 python3.9[216828]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161346.3463955-1400-108563032484030/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:07 np0005536586 python3.9[216980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:08 np0005536586 python3.9[217103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161347.3039613-1415-173157662984290/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:08 np0005536586 python3.9[217255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:49:08 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:08 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:08 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:08 np0005536586 systemd[1]: Reached target edpm_libvirt.target.
Nov 26 07:49:09 np0005536586 python3.9[217445]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 07:49:09 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:09 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:09 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:09 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:09 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:09 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 07:49:10 np0005536586 systemd-logind[777]: Session 48 logged out. Waiting for processes to exit.
Nov 26 07:49:10 np0005536586 systemd[1]: session-48.scope: Deactivated successfully.
Nov 26 07:49:10 np0005536586 systemd[1]: session-48.scope: Consumed 2min 32.767s CPU time.
Nov 26 07:49:10 np0005536586 systemd-logind[777]: Removed session 48.
Nov 26 07:49:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 07:49:12 np0005536586 podman[217542]: 2025-11-26 12:49:12.888033996 +0000 UTC m=+0.056345032 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 07:49:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 07:49:15 np0005536586 systemd-logind[777]: New session 49 of user zuul.
Nov 26 07:49:15 np0005536586 systemd[1]: Started Session 49 of User zuul.
Nov 26 07:49:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 07:49:16 np0005536586 python3.9[217718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:49:17 np0005536586 python3.9[217872]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:49:17 np0005536586 network[217889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:49:17 np0005536586 network[217890]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:49:17 np0005536586 network[217891]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:49:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 07:49:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 07:49:20 np0005536586 python3.9[218163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 07:49:20 np0005536586 python3.9[218247]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:49:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 26 07:49:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:25 np0005536586 python3.9[218400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:26 np0005536586 python3.9[218552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:49:26 np0005536586 python3.9[218705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:27 np0005536586 python3.9[218857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
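The command above runs `/usr/sbin/iscsi-iname` to generate a fresh initiator IQN before writing `/etc/iscsi/initiatorname.iscsi`. A hedged sketch of the name shape it produces: the Red Hat build defaults to the `iqn.1994-05.com.redhat` prefix followed by a random suffix; the suffix length used here is illustrative, not taken from this log.

```python
import secrets

def iscsi_iname(prefix="iqn.1994-05.com.redhat"):
    """Sketch of iscsi-iname output: <prefix>:<random hex suffix>.

    The 12-hex-character suffix length is an assumption for illustration.
    """
    return f"{prefix}:{secrets.token_hex(6)}"

name = iscsi_iname()
```

The real tool derives its suffix differently, but the resulting string is consumed the same way: written verbatim as `InitiatorName=<iqn>` into `initiatorname.iscsi`.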
Nov 26 07:49:27 np0005536586 python3.9[219010]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:28 np0005536586 python3.9[219133]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161367.3617265-95-40634847495217/.source.iscsi _original_basename=.5y93t886 follow=False checksum=37953765cb33ad82de40a0a37e146a9224f7da4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:28 np0005536586 python3.9[219285]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:29 np0005536586 python3.9[219437]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
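The `lineinfile` invocation above enforces a single `node.session.auth.chap_algs` entry in `iscsid.conf`: replace a line matching the regexp if one exists, otherwise insert after the commented stock entry. A minimal sketch of that idempotent edit in Python, using an illustrative config fragment (not the real file contents):

```python
import re

def lineinfile(text, line, regexp, insertafter=None):
    """Minimal sketch of ansible.builtin.lineinfile state=present semantics.

    Replaces the last line matching `regexp`; otherwise inserts `line`
    after the last line matching `insertafter`; otherwise appends.
    """
    lines = text.splitlines()
    hits = [i for i, l in enumerate(lines) if re.search(regexp, l)]
    if hits:
        lines[hits[-1]] = line
    else:
        anchors = (
            [i for i, l in enumerate(lines) if re.search(insertafter, l)]
            if insertafter else []
        )
        if anchors:
            lines.insert(anchors[-1] + 1, line)
        else:
            lines.append(line)
    return "\n".join(lines) + "\n"

# Illustrative iscsid.conf fragment:
conf = "#node.session.auth.chap.algs = MD5\n"
new = lineinfile(
    conf,
    line="node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5",
    regexp=r"^node.session.auth.chap_algs",
    insertafter=r"^#node.session.auth.chap.algs",
)
```

Running the same edit again matches the regexp and rewrites the line in place, which is what makes the play safe to re-run.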
Nov 26 07:49:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:30 np0005536586 python3.9[219589]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:49:30 np0005536586 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 26 07:49:30 np0005536586 python3.9[219745]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:49:30 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:30 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:30 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:31 np0005536586 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 07:49:31 np0005536586 systemd[1]: Starting Open-iSCSI...
Nov 26 07:49:31 np0005536586 kernel: Loading iSCSI transport class v2.0-870.
Nov 26 07:49:31 np0005536586 systemd[1]: Started Open-iSCSI.
Nov 26 07:49:31 np0005536586 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 26 07:49:31 np0005536586 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 26 07:49:31 np0005536586 python3.9[219946]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:49:31 np0005536586 network[219963]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:49:31 np0005536586 network[219964]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:49:31 np0005536586 network[219965]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:49:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:32 np0005536586 podman[219972]: 2025-11-26 12:49:32.606545737 +0000 UTC m=+0.039973639 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 07:49:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:34 np0005536586 python3.9[220253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 07:49:35 np0005536586 python3.9[220405]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 26 07:49:35 np0005536586 python3.9[220561]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:49:35
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:49:35 np0005536586 python3.9[220684]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161375.2105477-172-14801099994985/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:49:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:49:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:36 np0005536586 python3.9[220836]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:37 np0005536586 python3.9[220988]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:49:37 np0005536586 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 07:49:37 np0005536586 systemd[1]: Stopped Load Kernel Modules.
Nov 26 07:49:37 np0005536586 systemd[1]: Stopping Load Kernel Modules...
Nov 26 07:49:37 np0005536586 systemd[1]: Starting Load Kernel Modules...
Nov 26 07:49:37 np0005536586 systemd[1]: Finished Load Kernel Modules.
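The sequence above (modprobe, a `modules-load.d` drop-in, an `/etc/modules` entry, then a `systemd-modules-load.service` restart) loads `dm-multipath` immediately and on every boot. The persistent piece is a one-line drop-in; the path and module name are as logged, the comment is illustrative:

```
# /etc/modules-load.d/dm-multipath.conf — read by systemd-modules-load at boot
dm-multipath
```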
Nov 26 07:49:37 np0005536586 python3.9[221144]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:38 np0005536586 python3.9[221296]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:38 np0005536586 python3.9[221448]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:39 np0005536586 python3.9[221600]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:39 np0005536586 python3.9[221723]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161378.8255506-230-194717949847816/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:39 np0005536586 python3.9[221875]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:49:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:40 np0005536586 python3.9[222028]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:41 np0005536586 python3.9[222180]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:41 np0005536586 python3.9[222332]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:41 np0005536586 python3.9[222484]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:42 np0005536586 python3.9[222636]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:42 np0005536586 python3.9[222788]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:43 np0005536586 podman[222912]: 2025-11-26 12:49:43.140372366 +0000 UTC m=+0.063478840 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 07:49:43 np0005536586 python3.9[222957]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
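Taken together, the `replace` and `lineinfile` steps above amount to: ensure a `blacklist { }` stanza exists in `/etc/multipath.conf`, drop any catch-all `devnode ".*"` entry from it, and pin four settings under `defaults`. A hedged sketch of those edits on a toy config (the starting contents are illustrative; the real file came from the earlier `copy` task):

```python
import re

conf = """defaults {
}
blacklist {
    devnode ".*"
}
"""

# Mirror the ansible.builtin.replace step: strip the catch-all devnode
# entry that would blacklist every device (Ansible replace is multiline).
conf = re.sub(r'^blacklist\s*\{\n\s+devnode "\.\*"', "blacklist {", conf,
              flags=re.M)

# Mirror the four firstmatch lineinfile steps: insert each setting
# directly after the "defaults" line (replacement-on-rerun omitted here).
settings = [
    ("find_multipaths", "yes"),
    ("recheck_wwid", "yes"),
    ("skip_kpartx", "yes"),
    ("user_friendly_names", "no"),
]
lines = conf.splitlines()
anchor = next(i for i, l in enumerate(lines) if l.startswith("defaults"))
for offset, (key, val) in enumerate(settings, start=1):
    lines.insert(anchor + offset, f"        {key} {val}")
conf = "\n".join(lines) + "\n"
```

With `find_multipaths yes` and no catch-all blacklist, multipathd only claims devices that actually have multiple paths, which is the usual posture for a Cinder/iSCSI compute node.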
Nov 26 07:49:43 np0005536586 python3.9[223115]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:49:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:44 np0005536586 python3.9[223269]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:44 np0005536586 python3.9[223421]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:49:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:49:45 np0005536586 python3.9[223573]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:45 np0005536586 python3.9[223651]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:46 np0005536586 python3.9[223803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:46 np0005536586 python3.9[223881]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:46 np0005536586 python3.9[224033]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:47 np0005536586 python3.9[224185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:47 np0005536586 python3.9[224263]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:48 np0005536586 python3.9[224415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:48 np0005536586 python3.9[224493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:48 np0005536586 python3.9[224645]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:49:48 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:49 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:49 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:49 np0005536586 python3.9[224833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:50 np0005536586 python3.9[224911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:50 np0005536586 python3.9[225063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:50 np0005536586 python3.9[225141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:51 np0005536586 python3.9[225293]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:49:51 np0005536586 systemd[1]: Reloading.
Nov 26 07:49:51 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:49:51 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:49:51 np0005536586 systemd[1]: Starting Create netns directory...
Nov 26 07:49:51 np0005536586 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 07:49:51 np0005536586 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 07:49:51 np0005536586 systemd[1]: Finished Create netns directory.
Nov 26 07:49:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:52 np0005536586 python3.9[225485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:52 np0005536586 python3.9[225637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:53 np0005536586 python3.9[225760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161392.4157836-437-180485806795733/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:53 np0005536586 python3.9[225912]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:49:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:54 np0005536586 python3.9[226064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:49:54 np0005536586 python3.9[226187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161393.8781934-462-199446952216618/.source.json _original_basename=.hatuph8k follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:55 np0005536586 python3.9[226339]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:49:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:49:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:56 np0005536586 python3.9[226766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 26 07:49:57 np0005536586 python3.9[226918]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 07:49:57 np0005536586 python3.9[227070]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 07:49:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:49:58 np0005536586 python3[227241]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 07:50:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:00 np0005536586 podman[227252]: 2025-11-26 12:50:00.423867713 +0000 UTC m=+1.511643241 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 07:50:00 np0005536586 podman[227297]: 2025-11-26 12:50:00.55607657 +0000 UTC m=+0.033579438 container create fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:00 np0005536586 podman[227297]: 2025-11-26 12:50:00.539145526 +0000 UTC m=+0.016648423 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 07:50:00 np0005536586 python3[227241]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 07:50:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:01 np0005536586 python3.9[227477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:50:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.724 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:50:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.725 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:50:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.725 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:50:01 np0005536586 python3.9[227631]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:02 np0005536586 python3.9[227807]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:50:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:02 np0005536586 podman[227917]: 2025-11-26 12:50:02.310696687 +0000 UTC m=+0.055156449 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:50:02 np0005536586 podman[227917]: 2025-11-26 12:50:02.390253501 +0000 UTC m=+0.134713263 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:50:02 np0005536586 python3.9[228067]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161402.1583614-550-216499144774527/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:02 np0005536586 podman[228088]: 2025-11-26 12:50:02.692554613 +0000 UTC m=+0.054639065 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 07:50:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:50:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:50:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:03 np0005536586 python3.9[228237]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:50:03 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:03 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:03 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:03 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 842686d2-2fff-4076-9c56-91403ce010df does not exist
Nov 26 07:50:03 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev e968660b-3a79-4702-85a5-0089076802c6 does not exist
Nov 26 07:50:03 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev cb478dc7-42d3-4908-8ba3-444a6ae24db9 does not exist
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:50:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:50:03 np0005536586 python3.9[228543]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:04 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:04 np0005536586 podman[228610]: 2025-11-26 12:50:04.086904936 +0000 UTC m=+0.044686438 container create 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:50:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:04 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:04 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:04 np0005536586 podman[228610]: 2025-11-26 12:50:04.067860874 +0000 UTC m=+0.025642396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:04 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:50:04 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:04 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:50:04 np0005536586 systemd[1]: Started libpod-conmon-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope.
Nov 26 07:50:04 np0005536586 systemd[1]: Starting multipathd container...
Nov 26 07:50:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:04 np0005536586 podman[228610]: 2025-11-26 12:50:04.370415171 +0000 UTC m=+0.328196694 container init 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:50:04 np0005536586 podman[228610]: 2025-11-26 12:50:04.377790647 +0000 UTC m=+0.335572149 container start 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:04 np0005536586 podman[228610]: 2025-11-26 12:50:04.381057677 +0000 UTC m=+0.338839199 container attach 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:50:04 np0005536586 systemd[1]: libpod-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope: Deactivated successfully.
Nov 26 07:50:04 np0005536586 peaceful_mahavira[228659]: 167 167
Nov 26 07:50:04 np0005536586 conmon[228659]: conmon 2bb6433c28429467e6a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope/container/memory.events
Nov 26 07:50:04 np0005536586 podman[228672]: 2025-11-26 12:50:04.430833956 +0000 UTC m=+0.026269857 container died 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:50:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 systemd[1]: var-lib-containers-storage-overlay-00cd57d77db9aa25c86b96fe345dda1552d71e29f386740550d31a36292565d4-merged.mount: Deactivated successfully.
Nov 26 07:50:04 np0005536586 podman[228672]: 2025-11-26 12:50:04.459162027 +0000 UTC m=+0.054597928 container remove 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:50:04 np0005536586 systemd[1]: Started /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 07:50:04 np0005536586 systemd[1]: libpod-conmon-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope: Deactivated successfully.
Nov 26 07:50:04 np0005536586 podman[228661]: 2025-11-26 12:50:04.473602872 +0000 UTC m=+0.107875888 container init fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 07:50:04 np0005536586 multipathd[228682]: + sudo -E kolla_set_configs
Nov 26 07:50:04 np0005536586 podman[228661]: 2025-11-26 12:50:04.49245287 +0000 UTC m=+0.126725886 container start fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 07:50:04 np0005536586 podman[228661]: multipathd
Nov 26 07:50:04 np0005536586 systemd[1]: Started multipathd container.
Nov 26 07:50:04 np0005536586 multipathd[228682]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:50:04 np0005536586 multipathd[228682]: INFO:__main__:Validating config file
Nov 26 07:50:04 np0005536586 multipathd[228682]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:50:04 np0005536586 multipathd[228682]: INFO:__main__:Writing out command to execute
Nov 26 07:50:04 np0005536586 multipathd[228682]: ++ cat /run_command
Nov 26 07:50:04 np0005536586 multipathd[228682]: + CMD='/usr/sbin/multipathd -d'
Nov 26 07:50:04 np0005536586 multipathd[228682]: + ARGS=
Nov 26 07:50:04 np0005536586 multipathd[228682]: + sudo kolla_copy_cacerts
Nov 26 07:50:04 np0005536586 multipathd[228682]: + [[ ! -n '' ]]
Nov 26 07:50:04 np0005536586 multipathd[228682]: + . kolla_extend_start
Nov 26 07:50:04 np0005536586 multipathd[228682]: Running command: '/usr/sbin/multipathd -d'
Nov 26 07:50:04 np0005536586 multipathd[228682]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 07:50:04 np0005536586 multipathd[228682]: + umask 0022
Nov 26 07:50:04 np0005536586 multipathd[228682]: + exec /usr/sbin/multipathd -d
Nov 26 07:50:04 np0005536586 podman[228696]: 2025-11-26 12:50:04.60948831 +0000 UTC m=+0.101418942 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 26 07:50:04 np0005536586 multipathd[228682]: 2506.228861 | --------start up--------
Nov 26 07:50:04 np0005536586 multipathd[228682]: 2506.228875 | read /etc/multipath.conf
Nov 26 07:50:04 np0005536586 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 07:50:04 np0005536586 multipathd[228682]: 2506.233896 | path checkers start up
Nov 26 07:50:04 np0005536586 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.service: Failed with result 'exit-code'.
Nov 26 07:50:04 np0005536586 podman[228738]: 2025-11-26 12:50:04.650527571 +0000 UTC m=+0.057702843 container create d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:50:04 np0005536586 systemd[1]: Started libpod-conmon-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope.
Nov 26 07:50:04 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:04 np0005536586 podman[228738]: 2025-11-26 12:50:04.724982279 +0000 UTC m=+0.132157572 container init d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 07:50:04 np0005536586 podman[228738]: 2025-11-26 12:50:04.631354856 +0000 UTC m=+0.038530148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:04 np0005536586 podman[228738]: 2025-11-26 12:50:04.732278946 +0000 UTC m=+0.139454217 container start d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:50:04 np0005536586 podman[228738]: 2025-11-26 12:50:04.733574555 +0000 UTC m=+0.140749827 container attach d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:50:05 np0005536586 python3.9[228898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:50:05 np0005536586 python3.9[229063]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:05 np0005536586 intelligent_bassi[228786]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:50:05 np0005536586 intelligent_bassi[228786]: --> relative data size: 1.0
Nov 26 07:50:05 np0005536586 intelligent_bassi[228786]: --> All data devices are unavailable
Nov 26 07:50:05 np0005536586 systemd[1]: libpod-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope: Deactivated successfully.
Nov 26 07:50:05 np0005536586 podman[228738]: 2025-11-26 12:50:05.624426252 +0000 UTC m=+1.031601524 container died d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:05 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4-merged.mount: Deactivated successfully.
Nov 26 07:50:05 np0005536586 podman[228738]: 2025-11-26 12:50:05.667870951 +0000 UTC m=+1.075046223 container remove d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:50:05 np0005536586 systemd[1]: libpod-conmon-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope: Deactivated successfully.
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.202496066 +0000 UTC m=+0.037641204 container create deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:50:06 np0005536586 systemd[1]: Started libpod-conmon-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope.
Nov 26 07:50:06 np0005536586 python3.9[229348]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:50:06 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.277165438 +0000 UTC m=+0.112310596 container init deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.18726374 +0000 UTC m=+0.022408908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.283692085 +0000 UTC m=+0.118837224 container start deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.284898877 +0000 UTC m=+0.120044016 container attach deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:50:06 np0005536586 condescending_proskuriakova[229391]: 167 167
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.287918923 +0000 UTC m=+0.123064061 container died deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 07:50:06 np0005536586 systemd[1]: libpod-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope: Deactivated successfully.
Nov 26 07:50:06 np0005536586 systemd[1]: Stopping multipathd container...
Nov 26 07:50:06 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9aface5e9abb9e150b8425e12b2a79bfa83ad77e4450bebe56d9c56fc248b133-merged.mount: Deactivated successfully.
Nov 26 07:50:06 np0005536586 podman[229378]: 2025-11-26 12:50:06.316446068 +0000 UTC m=+0.151591206 container remove deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:50:06 np0005536586 systemd[1]: libpod-conmon-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope: Deactivated successfully.
Nov 26 07:50:06 np0005536586 multipathd[228682]: 2507.971208 | exit (signal)
Nov 26 07:50:06 np0005536586 multipathd[228682]: 2507.971655 | --------shut down-------
Nov 26 07:50:06 np0005536586 systemd[1]: libpod-fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.scope: Deactivated successfully.
Nov 26 07:50:06 np0005536586 podman[229400]: 2025-11-26 12:50:06.377195926 +0000 UTC m=+0.063298066 container died fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:06 np0005536586 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.timer: Deactivated successfully.
Nov 26 07:50:06 np0005536586 systemd[1]: Stopped /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 07:50:06 np0005536586 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-userdata-shm.mount: Deactivated successfully.
Nov 26 07:50:06 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49-merged.mount: Deactivated successfully.
Nov 26 07:50:06 np0005536586 podman[229400]: 2025-11-26 12:50:06.429145316 +0000 UTC m=+0.115247456 container cleanup fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 07:50:06 np0005536586 podman[229400]: multipathd
Nov 26 07:50:06 np0005536586 podman[229438]: 2025-11-26 12:50:06.472360102 +0000 UTC m=+0.036193738 container create d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:50:06 np0005536586 podman[229447]: multipathd
Nov 26 07:50:06 np0005536586 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 26 07:50:06 np0005536586 systemd[1]: Stopped multipathd container.
Nov 26 07:50:06 np0005536586 systemd[1]: Starting multipathd container...
Nov 26 07:50:06 np0005536586 systemd[1]: Started libpod-conmon-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope.
Nov 26 07:50:06 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 podman[229438]: 2025-11-26 12:50:06.55323646 +0000 UTC m=+0.117070097 container init d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:50:06 np0005536586 podman[229438]: 2025-11-26 12:50:06.458972238 +0000 UTC m=+0.022805895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:06 np0005536586 podman[229438]: 2025-11-26 12:50:06.560714688 +0000 UTC m=+0.124548325 container start d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:50:06 np0005536586 podman[229438]: 2025-11-26 12:50:06.562068116 +0000 UTC m=+0.125901753 container attach d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:50:06 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:06 np0005536586 systemd[1]: Started /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 07:50:06 np0005536586 podman[229462]: 2025-11-26 12:50:06.627565962 +0000 UTC m=+0.092452952 container init fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:06 np0005536586 multipathd[229479]: + sudo -E kolla_set_configs
Nov 26 07:50:06 np0005536586 podman[229462]: 2025-11-26 12:50:06.661791946 +0000 UTC m=+0.126678926 container start fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 07:50:06 np0005536586 podman[229462]: multipathd
Nov 26 07:50:06 np0005536586 systemd[1]: Started multipathd container.
Nov 26 07:50:06 np0005536586 multipathd[229479]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:50:06 np0005536586 multipathd[229479]: INFO:__main__:Validating config file
Nov 26 07:50:06 np0005536586 multipathd[229479]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:50:06 np0005536586 multipathd[229479]: INFO:__main__:Writing out command to execute
Nov 26 07:50:06 np0005536586 multipathd[229479]: ++ cat /run_command
Nov 26 07:50:06 np0005536586 multipathd[229479]: + CMD='/usr/sbin/multipathd -d'
Nov 26 07:50:06 np0005536586 multipathd[229479]: + ARGS=
Nov 26 07:50:06 np0005536586 multipathd[229479]: + sudo kolla_copy_cacerts
Nov 26 07:50:06 np0005536586 podman[229486]: 2025-11-26 12:50:06.723507692 +0000 UTC m=+0.063570128 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 07:50:06 np0005536586 multipathd[229479]: + [[ ! -n '' ]]
Nov 26 07:50:06 np0005536586 multipathd[229479]: + . kolla_extend_start
Nov 26 07:50:06 np0005536586 multipathd[229479]: Running command: '/usr/sbin/multipathd -d'
Nov 26 07:50:06 np0005536586 multipathd[229479]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 07:50:06 np0005536586 multipathd[229479]: + umask 0022
Nov 26 07:50:06 np0005536586 multipathd[229479]: + exec /usr/sbin/multipathd -d
Nov 26 07:50:06 np0005536586 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-15acf925f6a3cf08.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 07:50:06 np0005536586 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-15acf925f6a3cf08.service: Failed with result 'exit-code'.
Nov 26 07:50:06 np0005536586 multipathd[229479]: 2508.355303 | --------start up--------
Nov 26 07:50:06 np0005536586 multipathd[229479]: 2508.355371 | read /etc/multipath.conf
Nov 26 07:50:06 np0005536586 multipathd[229479]: 2508.360705 | path checkers start up
Nov 26 07:50:07 np0005536586 python3.9[229667]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:07 np0005536586 great_agnesi[229463]: {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    "0": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "devices": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "/dev/loop3"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            ],
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_name": "ceph_lv0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_size": "21470642176",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "name": "ceph_lv0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "tags": {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_name": "ceph",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.crush_device_class": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.encrypted": "0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_id": "0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.vdo": "0"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            },
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "vg_name": "ceph_vg0"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        }
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    ],
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    "1": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "devices": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "/dev/loop4"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            ],
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_name": "ceph_lv1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_size": "21470642176",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "name": "ceph_lv1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "tags": {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_name": "ceph",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.crush_device_class": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.encrypted": "0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_id": "1",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.vdo": "0"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            },
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "vg_name": "ceph_vg1"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        }
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    ],
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    "2": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "devices": [
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "/dev/loop5"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            ],
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_name": "ceph_lv2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_size": "21470642176",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "name": "ceph_lv2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "tags": {
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.cluster_name": "ceph",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.crush_device_class": "",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.encrypted": "0",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osd_id": "2",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:                "ceph.vdo": "0"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            },
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "type": "block",
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:            "vg_name": "ceph_vg2"
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:        }
Nov 26 07:50:07 np0005536586 great_agnesi[229463]:    ]
Nov 26 07:50:07 np0005536586 great_agnesi[229463]: }
Nov 26 07:50:07 np0005536586 systemd[1]: libpod-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope: Deactivated successfully.
Nov 26 07:50:07 np0005536586 conmon[229463]: conmon d0e44d8ace9d3a4666fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope/container/memory.events
Nov 26 07:50:07 np0005536586 podman[229438]: 2025-11-26 12:50:07.263731367 +0000 UTC m=+0.827565004 container died d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:50:07 np0005536586 systemd[1]: var-lib-containers-storage-overlay-301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac-merged.mount: Deactivated successfully.
Nov 26 07:50:07 np0005536586 podman[229438]: 2025-11-26 12:50:07.295713817 +0000 UTC m=+0.859547454 container remove d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:50:07 np0005536586 systemd[1]: libpod-conmon-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope: Deactivated successfully.
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.744572759 +0000 UTC m=+0.027524349 container create 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:07 np0005536586 systemd[1]: Started libpod-conmon-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope.
Nov 26 07:50:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.801068708 +0000 UTC m=+0.084020298 container init 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.80584506 +0000 UTC m=+0.088796650 container start 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:07 np0005536586 eager_edison[229977]: 167 167
Nov 26 07:50:07 np0005536586 systemd[1]: libpod-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope: Deactivated successfully.
Nov 26 07:50:07 np0005536586 conmon[229977]: conmon 52223a633795e845e42d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope/container/memory.events
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.811199941 +0000 UTC m=+0.094151550 container attach 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.811410858 +0000 UTC m=+0.094362448 container died 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:07 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ad9b1dd525522c83e616199e712b5ecdca17a7fe6a0e5c5f47b10678bf50be8a-merged.mount: Deactivated successfully.
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.73370545 +0000 UTC m=+0.016657059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:07 np0005536586 podman[229964]: 2025-11-26 12:50:07.834275393 +0000 UTC m=+0.117226983 container remove 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:50:07 np0005536586 systemd[1]: libpod-conmon-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope: Deactivated successfully.
Nov 26 07:50:07 np0005536586 python3.9[229961]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 07:50:07 np0005536586 podman[230022]: 2025-11-26 12:50:07.957144517 +0000 UTC m=+0.030150282 container create 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:50:07 np0005536586 systemd[1]: Started libpod-conmon-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope.
Nov 26 07:50:07 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:50:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:08 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:08.012487925 +0000 UTC m=+0.085493701 container init 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:08.018177166 +0000 UTC m=+0.091182932 container start 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:08.019249716 +0000 UTC m=+0.092255481 container attach 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:07.945037714 +0000 UTC m=+0.018043501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:50:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:08 np0005536586 python3.9[230167]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 26 07:50:08 np0005536586 kernel: Key type psk registered
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]: {
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_id": 1,
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "type": "bluestore"
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    },
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_id": 2,
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "type": "bluestore"
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    },
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_id": 0,
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:        "type": "bluestore"
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]:    }
Nov 26 07:50:08 np0005536586 vibrant_sammet[230055]: }
Nov 26 07:50:08 np0005536586 systemd[1]: libpod-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope: Deactivated successfully.
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:08.798984684 +0000 UTC m=+0.871990449 container died 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:50:08 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b-merged.mount: Deactivated successfully.
Nov 26 07:50:08 np0005536586 podman[230022]: 2025-11-26 12:50:08.832683525 +0000 UTC m=+0.905689291 container remove 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:50:08 np0005536586 systemd[1]: libpod-conmon-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope: Deactivated successfully.
Nov 26 07:50:08 np0005536586 python3.9[230344]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:50:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:08 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:50:08 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:08 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 726340b3-d55a-4ff4-9473-a116fecd744e does not exist
Nov 26 07:50:08 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 09b07daf-b843-4bff-bfdf-6641048db691 does not exist
Nov 26 07:50:09 np0005536586 python3.9[230540]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161408.4976869-630-144457494151988/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:09 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:09 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:50:09 np0005536586 python3.9[230692]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:10 np0005536586 python3.9[230844]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:50:10 np0005536586 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 07:50:10 np0005536586 systemd[1]: Stopped Load Kernel Modules.
Nov 26 07:50:10 np0005536586 systemd[1]: Stopping Load Kernel Modules...
Nov 26 07:50:10 np0005536586 systemd[1]: Starting Load Kernel Modules...
Nov 26 07:50:10 np0005536586 systemd[1]: Finished Load Kernel Modules.
Nov 26 07:50:10 np0005536586 python3.9[231000]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 07:50:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:12 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:12 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:12 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:13 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:13 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:13 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:13 np0005536586 podman[231077]: 2025-11-26 12:50:13.488401397 +0000 UTC m=+0.063441636 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:50:13 np0005536586 systemd-logind[777]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 07:50:13 np0005536586 lvm[231131]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 07:50:13 np0005536586 lvm[231131]: VG ceph_vg1 finished
Nov 26 07:50:13 np0005536586 lvm[231134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 07:50:13 np0005536586 lvm[231134]: VG ceph_vg0 finished
Nov 26 07:50:13 np0005536586 lvm[231135]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 07:50:13 np0005536586 lvm[231135]: VG ceph_vg2 finished
Nov 26 07:50:13 np0005536586 systemd-logind[777]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 07:50:13 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 07:50:13 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 07:50:13 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:13 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:13 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:14 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 07:50:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:14 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 07:50:14 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 07:50:14 np0005536586 systemd[1]: man-db-cache-update.service: Consumed 1.042s CPU time.
Nov 26 07:50:14 np0005536586 systemd[1]: run-rccb0fbeeea4c4cfcb490d5fad865620c.service: Deactivated successfully.
Nov 26 07:50:14 np0005536586 python3.9[232495]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:50:14 np0005536586 systemd[1]: Stopping Open-iSCSI...
Nov 26 07:50:14 np0005536586 iscsid[219785]: iscsid shutting down.
Nov 26 07:50:14 np0005536586 systemd[1]: iscsid.service: Deactivated successfully.
Nov 26 07:50:14 np0005536586 systemd[1]: Stopped Open-iSCSI.
Nov 26 07:50:14 np0005536586 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 07:50:14 np0005536586 systemd[1]: Starting Open-iSCSI...
Nov 26 07:50:14 np0005536586 systemd[1]: Started Open-iSCSI.
Nov 26 07:50:15 np0005536586 python3.9[232650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 07:50:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:16 np0005536586 python3.9[232806]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:16 np0005536586 python3.9[232958]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:50:16 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:16 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:16 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:17 np0005536586 python3.9[233144]: ansible-ansible.builtin.service_facts Invoked
Nov 26 07:50:17 np0005536586 network[233161]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 07:50:17 np0005536586 network[233162]: 'network-scripts' will be removed from distribution in near future.
Nov 26 07:50:17 np0005536586 network[233163]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 07:50:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:20 np0005536586 python3.9[233438]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:20 np0005536586 python3.9[233591]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:21 np0005536586 python3.9[233744]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:21 np0005536586 python3.9[233897]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:22 np0005536586 python3.9[234050]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:22 np0005536586 python3.9[234203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:23 np0005536586 python3.9[234356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:24 np0005536586 python3.9[234509]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:50:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:24 np0005536586 python3.9[234662]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:25 np0005536586 python3.9[234814]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:25 np0005536586 python3.9[234966]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:26 np0005536586 python3.9[235118]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:26 np0005536586 python3.9[235270]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:26 np0005536586 python3.9[235422]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:27 np0005536586 python3.9[235574]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:27 np0005536586 python3.9[235726]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:28 np0005536586 python3.9[235878]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:28 np0005536586 python3.9[236030]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:29 np0005536586 python3.9[236182]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:29 np0005536586 python3.9[236334]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:30 np0005536586 python3.9[236486]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:30 np0005536586 python3.9[236638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:30 np0005536586 python3.9[236790]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:31 np0005536586 python3.9[236942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:31 np0005536586 python3.9[237094]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:32 np0005536586 python3.9[237246]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 07:50:32 np0005536586 podman[237370]: 2025-11-26 12:50:32.794466083 +0000 UTC m=+0.047891432 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:50:33 np0005536586 python3.9[237412]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:50:33 np0005536586 systemd[1]: Reloading.
Nov 26 07:50:33 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:50:33 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:50:33 np0005536586 python3.9[237601]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:34 np0005536586 python3.9[237754]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:34 np0005536586 python3.9[237907]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:35 np0005536586 python3.9[238060]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:35 np0005536586 python3.9[238213]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:50:35
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:50:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:50:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:36 np0005536586 python3.9[238366]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:36 np0005536586 python3.9[238519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:36 np0005536586 podman[238672]: 2025-11-26 12:50:36.800345249 +0000 UTC m=+0.046050576 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 26 07:50:36 np0005536586 python3.9[238673]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 07:50:37 np0005536586 python3.9[238843]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:38 np0005536586 python3.9[238995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:38 np0005536586 python3.9[239147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:39 np0005536586 python3.9[239299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:39 np0005536586 python3.9[239451]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:40 np0005536586 python3.9[239603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:40 np0005536586 python3.9[239755]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:40 np0005536586 python3.9[239907]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:41 np0005536586 python3.9[240059]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:41 np0005536586 python3.9[240211]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:43 np0005536586 podman[240236]: 2025-11-26 12:50:43.882278722 +0000 UTC m=+0.051528378 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 07:50:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:44 np0005536586 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 26 07:50:45 np0005536586 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:50:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:50:45 np0005536586 python3.9[240388]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 26 07:50:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:46 np0005536586 python3.9[240541]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 07:50:47 np0005536586 python3.9[240699]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 07:50:47 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:50:47 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:50:47 np0005536586 systemd-logind[777]: New session 50 of user zuul.
Nov 26 07:50:47 np0005536586 systemd[1]: Started Session 50 of User zuul.
Nov 26 07:50:47 np0005536586 systemd[1]: session-50.scope: Deactivated successfully.
Nov 26 07:50:47 np0005536586 systemd-logind[777]: Session 50 logged out. Waiting for processes to exit.
Nov 26 07:50:47 np0005536586 systemd-logind[777]: Removed session 50.
Nov 26 07:50:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:48 np0005536586 python3.9[240886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:48 np0005536586 python3.9[241007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161448.1077352-1249-252507792139306/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:49 np0005536586 python3.9[241157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:49 np0005536586 python3.9[241233]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:49 np0005536586 python3.9[241383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:50 np0005536586 python3.9[241504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161449.6667883-1249-214256897861032/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:50 np0005536586 python3.9[241654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:51 np0005536586 python3.9[241775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161450.447445-1249-42044207191133/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:51 np0005536586 python3.9[241925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:51 np0005536586 python3.9[242046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161451.230874-1249-34548819520463/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:52 np0005536586 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 07:50:52 np0005536586 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 26 07:50:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:52 np0005536586 python3.9[242198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:52 np0005536586 python3.9[242319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161452.0037158-1249-174235084244132/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:53 np0005536586 python3.9[242471]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:53 np0005536586 python3.9[242623]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:50:54 np0005536586 python3.9[242775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:50:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:54 np0005536586 python3.9[242927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:54 np0005536586 python3.9[243050]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764161454.162896-1356-74291724971731/.source _original_basename=.pf2iqhf3 follow=False checksum=590b3b4ab7698d0274add06521820e666b67a1e5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 26 07:50:55 np0005536586 python3.9[243202]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:50:55 np0005536586 python3.9[243354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:50:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:56 np0005536586 python3.9[243475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161455.501877-1382-31479373457118/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:56 np0005536586 python3.9[243625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 07:50:57 np0005536586 python3.9[243746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161456.3783662-1397-101906590068266/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 07:50:57 np0005536586 python3.9[243898]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 26 07:50:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:50:58 np0005536586 python3.9[244050]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 07:50:58 np0005536586 python3[244202]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 07:51:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.726 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:51:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:51:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:51:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:05 np0005536586 podman[244248]: 2025-11-26 12:51:05.623803316 +0000 UTC m=+2.785487840 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:09 np0005536586 podman[244278]: 2025-11-26 12:51:09.322946305 +0000 UTC m=+2.491938734 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 07:51:09 np0005536586 podman[244213]: 2025-11-26 12:51:09.345434736 +0000 UTC m=+10.401701178 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 07:51:09 np0005536586 podman[244427]: 2025-11-26 12:51:09.447006878 +0000 UTC m=+0.031014943 container create 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 07:51:09 np0005536586 podman[244427]: 2025-11-26 12:51:09.43312491 +0000 UTC m=+0.017132985 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 07:51:09 np0005536586 python3[244202]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:09 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4a1f5183-bb3f-4197-8241-e40ebd38a161 does not exist
Nov 26 07:51:09 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4ed8f87b-28ab-49ca-a546-1d9a56b4ecbc does not exist
Nov 26 07:51:09 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 67eca5af-87bf-4653-b122-7f56ebd5e606 does not exist
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:51:09 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:51:09 np0005536586 podman[244753]: 2025-11-26 12:51:09.990545481 +0000 UTC m=+0.030272944 container create f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:51:10 np0005536586 python3.9[244728]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:51:10 np0005536586 systemd[1]: Started libpod-conmon-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope.
Nov 26 07:51:10 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:10.053895676 +0000 UTC m=+0.093623150 container init f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:10.059857965 +0000 UTC m=+0.099585428 container start f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:10.062281724 +0000 UTC m=+0.102009186 container attach f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 07:51:10 np0005536586 gifted_chaum[244768]: 167 167
Nov 26 07:51:10 np0005536586 systemd[1]: libpod-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope: Deactivated successfully.
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:10.065750663 +0000 UTC m=+0.105478125 container died f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:09.977827849 +0000 UTC m=+0.017555332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:10 np0005536586 systemd[1]: var-lib-containers-storage-overlay-53cd74ac09b2639127fe1b4f3a7c747288ccab34b8f7a32441b176545d8ef8a2-merged.mount: Deactivated successfully.
Nov 26 07:51:10 np0005536586 podman[244753]: 2025-11-26 12:51:10.085126195 +0000 UTC m=+0.124853658 container remove f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:51:10 np0005536586 systemd[1]: libpod-conmon-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope: Deactivated successfully.
Nov 26 07:51:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:10 np0005536586 podman[244814]: 2025-11-26 12:51:10.215275635 +0000 UTC m=+0.034137310 container create 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 07:51:10 np0005536586 systemd[1]: Started libpod-conmon-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope.
Nov 26 07:51:10 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:10 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:10 np0005536586 podman[244814]: 2025-11-26 12:51:10.280438366 +0000 UTC m=+0.099300052 container init 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 07:51:10 np0005536586 podman[244814]: 2025-11-26 12:51:10.286297962 +0000 UTC m=+0.105159637 container start 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 07:51:10 np0005536586 podman[244814]: 2025-11-26 12:51:10.287370963 +0000 UTC m=+0.106232639 container attach 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 07:51:10 np0005536586 podman[244814]: 2025-11-26 12:51:10.19840843 +0000 UTC m=+0.017270126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:51:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:10 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:51:10 np0005536586 python3.9[244960]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 26 07:51:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:11 np0005536586 keen_bardeen[244828]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:51:11 np0005536586 keen_bardeen[244828]: --> relative data size: 1.0
Nov 26 07:51:11 np0005536586 keen_bardeen[244828]: --> All data devices are unavailable
Nov 26 07:51:11 np0005536586 systemd[1]: libpod-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope: Deactivated successfully.
Nov 26 07:51:11 np0005536586 podman[244814]: 2025-11-26 12:51:11.128500432 +0000 UTC m=+0.947362108 container died 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:51:11 np0005536586 systemd[1]: var-lib-containers-storage-overlay-22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b-merged.mount: Deactivated successfully.
Nov 26 07:51:11 np0005536586 podman[244814]: 2025-11-26 12:51:11.166277993 +0000 UTC m=+0.985139668 container remove 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:51:11 np0005536586 systemd[1]: libpod-conmon-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope: Deactivated successfully.
Nov 26 07:51:11 np0005536586 python3.9[245130]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.611036677 +0000 UTC m=+0.030966862 container create d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:51:11 np0005536586 systemd[1]: Started libpod-conmon-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope.
Nov 26 07:51:11 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.665827297 +0000 UTC m=+0.085757482 container init d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.670844425 +0000 UTC m=+0.090774620 container start d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.672133003 +0000 UTC m=+0.092063198 container attach d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 07:51:11 np0005536586 tender_hypatia[245407]: 167 167
Nov 26 07:51:11 np0005536586 systemd[1]: libpod-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope: Deactivated successfully.
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.675040974 +0000 UTC m=+0.094971170 container died d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:51:11 np0005536586 systemd[1]: var-lib-containers-storage-overlay-46332e9446e31a676ec2c94a1f227370c019288698e193ecb754a668ced3ddbb-merged.mount: Deactivated successfully.
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.695063647 +0000 UTC m=+0.114993843 container remove d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:51:11 np0005536586 podman[245356]: 2025-11-26 12:51:11.599024765 +0000 UTC m=+0.018954980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:11 np0005536586 systemd[1]: libpod-conmon-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope: Deactivated successfully.
Nov 26 07:51:11 np0005536586 podman[245464]: 2025-11-26 12:51:11.821265647 +0000 UTC m=+0.030830957 container create 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:51:11 np0005536586 systemd[1]: Started libpod-conmon-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope.
Nov 26 07:51:11 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:11 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:11 np0005536586 podman[245464]: 2025-11-26 12:51:11.876306047 +0000 UTC m=+0.085871367 container init 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:51:11 np0005536586 podman[245464]: 2025-11-26 12:51:11.88196258 +0000 UTC m=+0.091527890 container start 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:51:11 np0005536586 podman[245464]: 2025-11-26 12:51:11.883204881 +0000 UTC m=+0.092770191 container attach 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:51:11 np0005536586 podman[245464]: 2025-11-26 12:51:11.809391364 +0000 UTC m=+0.018956694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:11 np0005536586 python3[245458]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 07:51:12 np0005536586 podman[245511]: 2025-11-26 12:51:12.078359635 +0000 UTC m=+0.029711317 container create fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 07:51:12 np0005536586 podman[245511]: 2025-11-26 12:51:12.064153467 +0000 UTC m=+0.015505168 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 07:51:12 np0005536586 python3[245458]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 26 07:51:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]: {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    "0": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "devices": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "/dev/loop3"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            ],
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_name": "ceph_lv0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_size": "21470642176",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "name": "ceph_lv0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "tags": {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_name": "ceph",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.crush_device_class": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.encrypted": "0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_id": "0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.vdo": "0"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            },
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "vg_name": "ceph_vg0"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        }
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    ],
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    "1": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "devices": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "/dev/loop4"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            ],
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_name": "ceph_lv1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_size": "21470642176",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "name": "ceph_lv1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "tags": {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_name": "ceph",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.crush_device_class": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.encrypted": "0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_id": "1",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.vdo": "0"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            },
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "vg_name": "ceph_vg1"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        }
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    ],
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    "2": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "devices": [
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "/dev/loop5"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            ],
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_name": "ceph_lv2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_size": "21470642176",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "name": "ceph_lv2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "tags": {
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.cluster_name": "ceph",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.crush_device_class": "",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.encrypted": "0",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osd_id": "2",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:                "ceph.vdo": "0"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            },
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "type": "block",
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:            "vg_name": "ceph_vg2"
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:        }
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]:    ]
Nov 26 07:51:12 np0005536586 peaceful_wozniak[245478]: }
Nov 26 07:51:12 np0005536586 systemd[1]: libpod-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope: Deactivated successfully.
Nov 26 07:51:12 np0005536586 podman[245464]: 2025-11-26 12:51:12.526291812 +0000 UTC m=+0.735857122 container died 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:51:12 np0005536586 systemd[1]: var-lib-containers-storage-overlay-b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb-merged.mount: Deactivated successfully.
Nov 26 07:51:12 np0005536586 podman[245464]: 2025-11-26 12:51:12.560279448 +0000 UTC m=+0.769844758 container remove 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:51:12 np0005536586 systemd[1]: libpod-conmon-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope: Deactivated successfully.
Nov 26 07:51:12 np0005536586 python3.9[245696]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.026490101 +0000 UTC m=+0.033682983 container create ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 07:51:13 np0005536586 systemd[1]: Started libpod-conmon-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope.
Nov 26 07:51:13 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.09417209 +0000 UTC m=+0.101364993 container init ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.099531041 +0000 UTC m=+0.106723924 container start ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.101008546 +0000 UTC m=+0.108201430 container attach ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 07:51:13 np0005536586 wizardly_lovelace[246004]: 167 167
Nov 26 07:51:13 np0005536586 systemd[1]: libpod-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope: Deactivated successfully.
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.104071449 +0000 UTC m=+0.111264342 container died ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.010833707 +0000 UTC m=+0.018026610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:13 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2b058cb83ef9528cbabcd806c5e023b9a1f175c5ce7f8272ce274b24ae9ea8f8-merged.mount: Deactivated successfully.
Nov 26 07:51:13 np0005536586 podman[245970]: 2025-11-26 12:51:13.123566377 +0000 UTC m=+0.130759261 container remove ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:51:13 np0005536586 systemd[1]: libpod-conmon-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope: Deactivated successfully.
Nov 26 07:51:13 np0005536586 python3.9[246001]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:51:13 np0005536586 podman[246026]: 2025-11-26 12:51:13.252550151 +0000 UTC m=+0.032379016 container create 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:51:13 np0005536586 systemd[1]: Started libpod-conmon-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope.
Nov 26 07:51:13 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:13 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:13 np0005536586 podman[246026]: 2025-11-26 12:51:13.308434782 +0000 UTC m=+0.088263657 container init 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:51:13 np0005536586 podman[246026]: 2025-11-26 12:51:13.316240086 +0000 UTC m=+0.096068951 container start 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:51:13 np0005536586 podman[246026]: 2025-11-26 12:51:13.317468741 +0000 UTC m=+0.097297597 container attach 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:51:13 np0005536586 podman[246026]: 2025-11-26 12:51:13.239376268 +0000 UTC m=+0.019205153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:51:13 np0005536586 python3.9[246194]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161473.2649786-1489-214859560833179/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]: {
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_id": 1,
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "type": "bluestore"
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    },
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_id": 2,
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "type": "bluestore"
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    },
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_id": 0,
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:        "type": "bluestore"
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]:    }
Nov 26 07:51:14 np0005536586 elastic_lumiere[246068]: }
Nov 26 07:51:14 np0005536586 systemd[1]: libpod-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope: Deactivated successfully.
Nov 26 07:51:14 np0005536586 conmon[246068]: conmon 7f535d22077d7dbf988c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope/container/memory.events
Nov 26 07:51:14 np0005536586 podman[246026]: 2025-11-26 12:51:14.08637544 +0000 UTC m=+0.866204315 container died 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:51:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:14 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30-merged.mount: Deactivated successfully.
Nov 26 07:51:14 np0005536586 podman[246026]: 2025-11-26 12:51:14.897891789 +0000 UTC m=+1.677720654 container remove 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:51:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:51:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:14 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:51:14 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:14 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev ac5fd1ce-6b5a-4792-aca9-5e65d51cc242 does not exist
Nov 26 07:51:14 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev c5cc539d-1948-48c5-913b-bf7ca63388f6 does not exist
Nov 26 07:51:14 np0005536586 systemd[1]: libpod-conmon-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope: Deactivated successfully.
Nov 26 07:51:15 np0005536586 podman[246299]: 2025-11-26 12:51:15.004749134 +0000 UTC m=+0.899908878 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 07:51:15 np0005536586 python3.9[246272]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 07:51:15 np0005536586 systemd[1]: Reloading.
Nov 26 07:51:15 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:51:15 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:51:15 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:15 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:51:15 np0005536586 python3.9[246494]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 07:51:15 np0005536586 systemd[1]: Reloading.
Nov 26 07:51:15 np0005536586 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 07:51:15 np0005536586 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 07:51:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:16 np0005536586 systemd[1]: Starting nova_compute container...
Nov 26 07:51:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:16 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:16 np0005536586 podman[246534]: 2025-11-26 12:51:16.203240134 +0000 UTC m=+0.071069415 container init fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 26 07:51:16 np0005536586 podman[246534]: 2025-11-26 12:51:16.208051413 +0000 UTC m=+0.075880675 container start fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 07:51:16 np0005536586 podman[246534]: nova_compute
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + sudo -E kolla_set_configs
Nov 26 07:51:16 np0005536586 systemd[1]: Started nova_compute container.
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Validating config file
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying service configuration files
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Deleting /etc/ceph
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Creating directory /etc/ceph
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Writing out command to execute
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:16 np0005536586 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 07:51:16 np0005536586 nova_compute[246546]: ++ cat /run_command
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + CMD=nova-compute
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + ARGS=
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + sudo kolla_copy_cacerts
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + [[ ! -n '' ]]
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + . kolla_extend_start
Nov 26 07:51:16 np0005536586 nova_compute[246546]: Running command: 'nova-compute'
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + umask 0022
Nov 26 07:51:16 np0005536586 nova_compute[246546]: + exec nova-compute
Nov 26 07:51:16 np0005536586 python3.9[246707]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:51:17 np0005536586 python3.9[246858]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:51:17 np0005536586 python3.9[247008]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 07:51:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.370 246550 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.387 246550 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.387 246550 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 26 07:51:18 np0005536586 python3.9[247164]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 07:51:18 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 07:51:18 np0005536586 nova_compute[246546]: 2025-11-26 12:51:18.902 246550 INFO nova.virt.driver [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.014 246550 INFO nova.compute.provider_config [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 WARNING oslo_config.cfg [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 07:51:19 np0005536586 nova_compute[246546]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 07:51:19 np0005536586 nova_compute[246546]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 07:51:19 np0005536586 nova_compute[246546]: and ``live_migration_inbound_addr`` respectively.
Nov 26 07:51:19 np0005536586 nova_compute[246546]: ).  Its value may be silently ignored in the future.#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_secret_uuid        = f7d7fe93-41e5-51c4-b72d-63b38686102e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.159 246550 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.173 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 26 07:51:19 np0005536586 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 07:51:19 np0005536586 systemd[1]: Started libvirt QEMU daemon.
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.240 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5d79d7f9d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.242 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5d79d7f9d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.243 246550 INFO nova.virt.libvirt.driver [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.255 246550 WARNING nova.virt.libvirt.driver [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.255 246550 DEBUG nova.virt.libvirt.volume.mount [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 26 07:51:19 np0005536586 python3.9[247375]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 07:51:19 np0005536586 systemd[1]: Stopping nova_compute container...
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.583 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.583 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 07:51:19 np0005536586 nova_compute[246546]: 2025-11-26 12:51:19.584 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 07:51:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:20 np0005536586 virtqemud[247331]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 26 07:51:20 np0005536586 systemd[1]: libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope: Deactivated successfully.
Nov 26 07:51:20 np0005536586 virtqemud[247331]: hostname: compute-0
Nov 26 07:51:20 np0005536586 systemd[1]: libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope: Consumed 2.586s CPU time.
Nov 26 07:51:20 np0005536586 virtqemud[247331]: End of file while reading data: Input/output error
Nov 26 07:51:20 np0005536586 conmon[246546]: conmon fbc1fe4d8414fa081c74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope/container/memory.events
Nov 26 07:51:20 np0005536586 podman[247390]: 2025-11-26 12:51:20.310241912 +0000 UTC m=+0.762962706 container died fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 26 07:51:20 np0005536586 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86-userdata-shm.mount: Deactivated successfully.
Nov 26 07:51:20 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca-merged.mount: Deactivated successfully.
Nov 26 07:51:20 np0005536586 podman[247390]: 2025-11-26 12:51:20.734677687 +0000 UTC m=+1.187398482 container cleanup fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:51:20 np0005536586 podman[247390]: nova_compute
Nov 26 07:51:20 np0005536586 podman[247422]: nova_compute
Nov 26 07:51:20 np0005536586 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 26 07:51:20 np0005536586 systemd[1]: Stopped nova_compute container.
Nov 26 07:51:20 np0005536586 systemd[1]: Starting nova_compute container...
Nov 26 07:51:20 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:20 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:20 np0005536586 podman[247431]: 2025-11-26 12:51:20.885889821 +0000 UTC m=+0.077176697 container init fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:51:20 np0005536586 podman[247431]: 2025-11-26 12:51:20.893959312 +0000 UTC m=+0.085246168 container start fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 07:51:20 np0005536586 podman[247431]: nova_compute
Nov 26 07:51:20 np0005536586 nova_compute[247443]: + sudo -E kolla_set_configs
Nov 26 07:51:20 np0005536586 systemd[1]: Started nova_compute container.
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Validating config file
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying service configuration files
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /etc/ceph
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Creating directory /etc/ceph
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Writing out command to execute
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:20 np0005536586 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 07:51:20 np0005536586 nova_compute[247443]: ++ cat /run_command
Nov 26 07:51:20 np0005536586 nova_compute[247443]: + CMD=nova-compute
Nov 26 07:51:20 np0005536586 nova_compute[247443]: + ARGS=
Nov 26 07:51:20 np0005536586 nova_compute[247443]: + sudo kolla_copy_cacerts
Nov 26 07:51:21 np0005536586 nova_compute[247443]: + [[ ! -n '' ]]
Nov 26 07:51:21 np0005536586 nova_compute[247443]: + . kolla_extend_start
Nov 26 07:51:21 np0005536586 nova_compute[247443]: Running command: 'nova-compute'
Nov 26 07:51:21 np0005536586 nova_compute[247443]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 07:51:21 np0005536586 nova_compute[247443]: + umask 0022
Nov 26 07:51:21 np0005536586 nova_compute[247443]: + exec nova-compute
Nov 26 07:51:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:21 np0005536586 python3.9[247606]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 07:51:21 np0005536586 systemd[1]: Started libpod-conmon-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope.
Nov 26 07:51:21 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:51:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 26 07:51:21 np0005536586 podman[247625]: 2025-11-26 12:51:21.686638673 +0000 UTC m=+0.094570254 container init 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 26 07:51:21 np0005536586 podman[247625]: 2025-11-26 12:51:21.694153018 +0000 UTC m=+0.102084599 container start 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 07:51:21 np0005536586 python3.9[247606]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Applying nova statedir ownership
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 26 07:51:21 np0005536586 nova_compute_init[247644]: INFO:nova_statedir:Nova statedir ownership complete
Nov 26 07:51:21 np0005536586 systemd[1]: libpod-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope: Deactivated successfully.
Nov 26 07:51:21 np0005536586 podman[247645]: 2025-11-26 12:51:21.75461145 +0000 UTC m=+0.034053000 container died 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:51:21 np0005536586 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73-userdata-shm.mount: Deactivated successfully.
Nov 26 07:51:21 np0005536586 systemd[1]: var-lib-containers-storage-overlay-63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2-merged.mount: Deactivated successfully.
Nov 26 07:51:21 np0005536586 podman[247653]: 2025-11-26 12:51:21.790751184 +0000 UTC m=+0.035015674 container cleanup 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 26 07:51:21 np0005536586 systemd[1]: libpod-conmon-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope: Deactivated successfully.
Nov 26 07:51:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:22 np0005536586 systemd[1]: session-49.scope: Deactivated successfully.
Nov 26 07:51:22 np0005536586 systemd[1]: session-49.scope: Consumed 1min 43.958s CPU time.
Nov 26 07:51:22 np0005536586 systemd-logind[777]: Session 49 logged out. Waiting for processes to exit.
Nov 26 07:51:22 np0005536586 systemd-logind[777]: Removed session 49.
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.684 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.684 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.685 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.685 247447 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.799 247447 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.810 247447 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:51:22 np0005536586 nova_compute[247443]: 2025-11-26 12:51:22.810 247447 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.200 247447 INFO nova.virt.driver [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.286 247447 INFO nova.compute.provider_config [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 WARNING oslo_config.cfg [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 07:51:23 np0005536586 nova_compute[247443]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 07:51:23 np0005536586 nova_compute[247443]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 07:51:23 np0005536586 nova_compute[247443]: and ``live_migration_inbound_addr`` respectively.
Nov 26 07:51:23 np0005536586 nova_compute[247443]: ).  Its value may be silently ignored in the future.#033[00m
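[editor's note — the deprecation warning above says `live_migration_uri` should be replaced by `live_migration_scheme` and `live_migration_inbound_addr`. A minimal nova.conf sketch of the replacement, matching the TLS scheme visible in the configured URI below; the address value is illustrative, not taken from this log:]

```ini
[libvirt]
# Replaces the scheme portion of the old qemu+tls://%s/system URI
live_migration_scheme = tls
# Replaces the %s target-host portion; set per compute host
live_migration_inbound_addr = migration.example.internal
```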
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_secret_uuid        = f7d7fe93-41e5-51c4-b72d-63b38686102e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.426 247447 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.434 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.446 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f685bbc9490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.449 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f685bbc9490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.450 247447 INFO nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.453 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <host>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <uuid>0a08c8a3-e2a8-4364-8947-610c4936d879</uuid>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <arch>x86_64</arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model>EPYC-Milan-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <vendor>AMD</vendor>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <microcode version='167776725'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <signature family='25' model='1' stepping='1'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <maxphysaddr mode='emulate' bits='48'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='x2apic'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='tsc-deadline'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='osxsave'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='hypervisor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='tsc_adjust'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='ospke'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='vaes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='vpclmulqdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='spec-ctrl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='stibp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='arch-capabilities'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='cmp_legacy'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='virt-ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='lbrv'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='tsc-scale'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='vmcb-clean'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='pause-filter'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='pfthreshold'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='v-vmsave-vmload'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='vgif'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='rdctl-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='skip-l1dfl-vmentry'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='mds-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature name='pschange-mc-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <pages unit='KiB' size='4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <pages unit='KiB' size='2048'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <pages unit='KiB' size='1048576'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <power_management>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <suspend_mem/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </power_management>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <iommu support='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <migration_features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <live/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <uri_transports>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <uri_transport>tcp</uri_transport>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <uri_transport>rdma</uri_transport>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </uri_transports>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </migration_features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <topology>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <cells num='1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <cell id='0'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <memory unit='KiB'>7865364</memory>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <pages unit='KiB' size='4'>1966341</pages>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <pages unit='KiB' size='2048'>0</pages>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <distances>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:            <sibling id='0' value='10'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          </distances>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          <cpus num='4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:          </cpus>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        </cell>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </cells>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </topology>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <cache>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </cache>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <secmodel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model>selinux</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <doi>0</doi>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </secmodel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <secmodel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model>dac</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <doi>0</doi>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </secmodel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </host>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <guest>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <os_type>hvm</os_type>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <arch name='i686'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <wordsize>32</wordsize>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <domain type='qemu'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <domain type='kvm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <pae/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <nonpae/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <acpi default='on' toggle='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <apic default='on' toggle='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <cpuselection/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <deviceboot/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <disksnapshot default='on' toggle='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <externalSnapshot/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </guest>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <guest>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <os_type>hvm</os_type>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <arch name='x86_64'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <wordsize>64</wordsize>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <domain type='qemu'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <domain type='kvm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <acpi default='on' toggle='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <apic default='on' toggle='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <cpuselection/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <deviceboot/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <disksnapshot default='on' toggle='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <externalSnapshot/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </guest>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 
Nov 26 07:51:23 np0005536586 nova_compute[247443]: </capabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: #033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.462 247447 WARNING nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.462 247447 DEBUG nova.virt.libvirt.volume.mount [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.466 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.488 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 07:51:23 np0005536586 nova_compute[247443]: <domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <domain>kvm</domain>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <arch>i686</arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <vcpu max='240'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <iothreads supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <os supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='firmware'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <loader supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>rom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pflash</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='readonly'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>yes</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='secure'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </loader>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </os>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-passthrough' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='hostPassthroughMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='maximum' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='maximumMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-model' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <vendor>AMD</vendor>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='x2apic'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='hypervisor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vaes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vpclmulqdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='stibp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='overflow-recov'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='succor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lbrv'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-scale'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='flushbyasid'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pause-filter'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pfthreshold'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vgif'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='custom' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Milan-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-128'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-256'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-512'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v6'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v7'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <memoryBacking supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='sourceType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>anonymous</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>memfd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </memoryBacking>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <disk supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='diskDevice'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>disk</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cdrom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>floppy</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>lun</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ide</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>fdc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>sata</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </disk>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <graphics supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vnc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egl-headless</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </graphics>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <video supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='modelType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vga</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cirrus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>none</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>bochs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ramfb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </video>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hostdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='mode'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>subsystem</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='startupPolicy'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>mandatory</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>requisite</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>optional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='subsysType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pci</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='capsType'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='pciBackend'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hostdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <rng supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>random</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </rng>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <filesystem supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='driverType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>path</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>handle</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtiofs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </filesystem>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <tpm supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-tis</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-crb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emulator</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>external</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendVersion'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>2.0</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </tpm>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <redirdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </redirdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <channel supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </channel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <crypto supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </crypto>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <interface supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>passt</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </interface>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <panic supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>isa</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>hyperv</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </panic>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <console supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>null</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dev</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pipe</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stdio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>udp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tcp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu-vdagent</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </console>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <gic supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <vmcoreinfo supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <genid supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backingStoreInput supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backup supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <async-teardown supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <ps2 supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sev supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sgx supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hyperv supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='features'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>relaxed</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vapic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>spinlocks</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vpindex</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>runtime</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>synic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stimer</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reset</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vendor_id</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>frequencies</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reenlightenment</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tlbflush</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ipi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>avic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emsr_bitmap</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>xmm_input</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <spinlocks>4095</spinlocks>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <stimer_direct>on</stimer_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hyperv>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <launchSecurity supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='sectype'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tdx</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </launchSecurity>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: </domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.503 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 07:51:23 np0005536586 nova_compute[247443]: <domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <domain>kvm</domain>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <arch>i686</arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <vcpu max='4096'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <iothreads supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <os supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='firmware'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <loader supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>rom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pflash</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='readonly'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>yes</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='secure'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </loader>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </os>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-passthrough' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='hostPassthroughMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='maximum' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='maximumMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-model' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <vendor>AMD</vendor>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='x2apic'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='hypervisor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vaes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vpclmulqdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='stibp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='overflow-recov'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='succor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lbrv'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-scale'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='flushbyasid'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pause-filter'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pfthreshold'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vgif'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='custom' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Milan-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-128'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-256'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-512'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v6'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v7'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <memoryBacking supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='sourceType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>anonymous</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>memfd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </memoryBacking>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <disk supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='diskDevice'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>disk</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cdrom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>floppy</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>lun</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>fdc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>sata</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </disk>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <graphics supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vnc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egl-headless</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </graphics>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <video supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='modelType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vga</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cirrus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>none</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>bochs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ramfb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </video>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hostdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='mode'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>subsystem</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='startupPolicy'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>mandatory</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>requisite</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>optional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='subsysType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pci</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='capsType'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='pciBackend'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hostdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <rng supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>random</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </rng>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <filesystem supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='driverType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>path</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>handle</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtiofs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </filesystem>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <tpm supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-tis</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-crb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emulator</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>external</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendVersion'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>2.0</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </tpm>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <redirdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </redirdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <channel supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </channel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <crypto supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </crypto>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <interface supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>passt</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </interface>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <panic supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>isa</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>hyperv</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </panic>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <console supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>null</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dev</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pipe</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stdio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>udp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tcp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu-vdagent</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </console>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <gic supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <vmcoreinfo supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <genid supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backingStoreInput supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backup supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <async-teardown supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <ps2 supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sev supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sgx supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hyperv supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='features'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>relaxed</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vapic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>spinlocks</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vpindex</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>runtime</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>synic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stimer</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reset</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vendor_id</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>frequencies</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reenlightenment</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tlbflush</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ipi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>avic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emsr_bitmap</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>xmm_input</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <spinlocks>4095</spinlocks>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <stimer_direct>on</stimer_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hyperv>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <launchSecurity supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='sectype'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tdx</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </launchSecurity>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: </domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.504 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.508 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 07:51:23 np0005536586 nova_compute[247443]: <domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <domain>kvm</domain>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <arch>x86_64</arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <vcpu max='240'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <iothreads supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <os supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='firmware'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <loader supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>rom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pflash</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='readonly'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>yes</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='secure'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </loader>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </os>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-passthrough' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='hostPassthroughMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='maximum' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='maximumMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-model' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <vendor>AMD</vendor>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='x2apic'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='hypervisor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vaes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vpclmulqdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='stibp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='overflow-recov'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='succor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lbrv'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-scale'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='flushbyasid'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pause-filter'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pfthreshold'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vgif'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='custom' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Milan-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-128'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-256'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-512'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v6'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v7'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <memoryBacking supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='sourceType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>anonymous</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>memfd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </memoryBacking>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <disk supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='diskDevice'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>disk</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cdrom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>floppy</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>lun</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ide</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>fdc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>sata</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </disk>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <graphics supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vnc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egl-headless</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </graphics>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <video supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='modelType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vga</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cirrus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>none</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>bochs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ramfb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </video>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hostdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='mode'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>subsystem</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='startupPolicy'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>mandatory</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>requisite</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>optional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='subsysType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pci</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='capsType'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='pciBackend'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hostdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <rng supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>random</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </rng>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <filesystem supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='driverType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>path</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>handle</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtiofs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </filesystem>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <tpm supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-tis</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-crb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emulator</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>external</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendVersion'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>2.0</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </tpm>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <redirdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </redirdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <channel supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </channel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <crypto supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </crypto>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <interface supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>passt</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </interface>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <panic supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>isa</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>hyperv</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </panic>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <console supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>null</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dev</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pipe</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stdio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>udp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tcp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu-vdagent</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </console>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <gic supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <vmcoreinfo supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <genid supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backingStoreInput supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backup supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <async-teardown supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <ps2 supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sev supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sgx supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hyperv supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='features'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>relaxed</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vapic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>spinlocks</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vpindex</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>runtime</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>synic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stimer</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reset</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vendor_id</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>frequencies</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reenlightenment</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tlbflush</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ipi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>avic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emsr_bitmap</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>xmm_input</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <spinlocks>4095</spinlocks>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <stimer_direct>on</stimer_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hyperv>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <launchSecurity supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='sectype'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tdx</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </launchSecurity>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: </domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.549 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 07:51:23 np0005536586 nova_compute[247443]: <domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <domain>kvm</domain>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <arch>x86_64</arch>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <vcpu max='4096'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <iothreads supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <os supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='firmware'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>efi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <loader supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>rom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pflash</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='readonly'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>yes</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='secure'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>yes</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>no</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </loader>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </os>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-passthrough' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='hostPassthroughMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='maximum' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='maximumMigratable'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>on</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>off</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='host-model' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <vendor>AMD</vendor>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='x2apic'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='hypervisor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vaes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vpclmulqdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='stibp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='overflow-recov'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='succor'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lbrv'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='tsc-scale'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='flushbyasid'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pause-filter'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='pfthreshold'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='vgif'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <mode name='custom' supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Broadwell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Cooperlake-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Denverton-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='auto-ibrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='EPYC-Milan-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amd-psfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='no-nested-data-bp'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='null-sel-clr-base'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='stibp-always-on'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='GraniteRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-128'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-256'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx10-512'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='prefetchiti'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Haswell-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v6'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Icelake-Server-v7'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='KnightsMill-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4fmaps'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-4vnniw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512er'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512pf'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G4-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Opteron_G5-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fma4'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tbm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xop'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SapphireRapids-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='amx-tile'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-bf16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-fp16'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512-vpopcntdq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bitalg'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vbmi2'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrc'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fzrm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='la57'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='taa-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='tsx-ldtrk'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='xfd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='SierraForest-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ifma'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-ne-convert'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx-vnni-int8'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='bus-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cmpccxadd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fbsdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='fsrs'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ibrs-all'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mcdt-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='pbrsb-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='psdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='sbdr-ssdp-no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='serialize'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Client-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='hle'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='rtm'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Skylake-Server-v5'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512bw'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512cd'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512dq'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512f'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='avx512vl'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='mpx'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v2'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v3'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='core-capability'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='split-lock-detect'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='Snowridge-v4'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='cldemote'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='gfni'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdir64b'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='movdiri'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='athlon-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='core2duo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='coreduo-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='n270-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='ss'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <blockers model='phenom-v1'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnow'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <feature name='3dnowext'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </blockers>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </mode>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </cpu>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <memoryBacking supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <enum name='sourceType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>anonymous</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <value>memfd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </memoryBacking>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <disk supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='diskDevice'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>disk</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cdrom</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>floppy</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>lun</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>fdc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>sata</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </disk>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <graphics supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vnc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egl-headless</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </graphics>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <video supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='modelType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vga</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>cirrus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>none</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>bochs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ramfb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </video>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hostdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='mode'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>subsystem</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='startupPolicy'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>mandatory</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>requisite</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>optional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='subsysType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pci</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>scsi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='capsType'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='pciBackend'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hostdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <rng supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtio-non-transitional</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>random</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>egd</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </rng>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <filesystem supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='driverType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>path</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>handle</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>virtiofs</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </filesystem>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <tpm supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-tis</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tpm-crb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emulator</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>external</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendVersion'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>2.0</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </tpm>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <redirdev supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='bus'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>usb</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </redirdev>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <channel supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </channel>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <crypto supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendModel'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>builtin</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </crypto>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <interface supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='backendType'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>default</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>passt</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </interface>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <panic supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='model'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>isa</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>hyperv</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </panic>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <console supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='type'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>null</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vc</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pty</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dev</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>file</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>pipe</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stdio</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>udp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tcp</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>unix</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>qemu-vdagent</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>dbus</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </console>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </devices>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  <features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <gic supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <vmcoreinfo supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <genid supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backingStoreInput supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <backup supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <async-teardown supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <ps2 supported='yes'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sev supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <sgx supported='no'/>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <hyperv supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='features'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>relaxed</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vapic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>spinlocks</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vpindex</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>runtime</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>synic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>stimer</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reset</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>vendor_id</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>frequencies</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>reenlightenment</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tlbflush</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>ipi</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>avic</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>emsr_bitmap</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>xmm_input</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <spinlocks>4095</spinlocks>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <stimer_direct>on</stimer_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </defaults>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </hyperv>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    <launchSecurity supported='yes'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      <enum name='sectype'>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:        <value>tdx</value>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:      </enum>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:    </launchSecurity>
Nov 26 07:51:23 np0005536586 nova_compute[247443]:  </features>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: </domainCapabilities>
Nov 26 07:51:23 np0005536586 nova_compute[247443]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.600 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.601 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.601 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Secure Boot support detected#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.602 247447 INFO nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.610 247447 DEBUG nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.634 247447 INFO nova.virt.node [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Determined node identity b5f91a62-c356-4895-a9c1-523d85f8751b from /var/lib/nova/compute_id#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.644 247447 WARNING nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute nodes ['b5f91a62-c356-4895-a9c1-523d85f8751b'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.677 247447 INFO nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 WARNING nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:51:23 np0005536586 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:51:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:51:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755524487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.041 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:51:24 np0005536586 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 07:51:24 np0005536586 systemd[1]: Started libvirt nodedev daemon.
Nov 26 07:51:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.434 247447 WARNING nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.435 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.435 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.436 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.447 247447 WARNING nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] No compute node record for compute-0.ctlplane.example.com:b5f91a62-c356-4895-a9c1-523d85f8751b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b5f91a62-c356-4895-a9c1-523d85f8751b could not be found.#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.460 247447 INFO nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: b5f91a62-c356-4895-a9c1-523d85f8751b#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.504 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:51:24 np0005536586 nova_compute[247443]: 2025-11-26 12:51:24.504 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.238 247447 INFO nova.scheduler.client.report [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] [req-53ab3889-e68c-4a11-9d00-87662e78ad43] Created resource provider record via placement API for resource provider with UUID b5f91a62-c356-4895-a9c1-523d85f8751b and name compute-0.ctlplane.example.com.#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.557 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:51:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:51:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933894025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.882 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.886 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 26 07:51:25 np0005536586 nova_compute[247443]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.887 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.888 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.888 247447 DEBUG nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.924 247447 DEBUG nova.scheduler.client.report [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updated inventory for provider b5f91a62-c356-4895-a9c1-523d85f8751b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.924 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating resource provider b5f91a62-c356-4895-a9c1-523d85f8751b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.925 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 07:51:25 np0005536586 nova_compute[247443]: 2025-11-26 12:51:25.986 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating resource provider b5f91a62-c356-4895-a9c1-523d85f8751b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 07:51:26 np0005536586 nova_compute[247443]: 2025-11-26 12:51:26.001 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 07:51:26 np0005536586 nova_compute[247443]: 2025-11-26 12:51:26.002 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:51:26 np0005536586 nova_compute[247443]: 2025-11-26 12:51:26.002 247447 DEBUG nova.service [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 26 07:51:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:26 np0005536586 nova_compute[247443]: 2025-11-26 12:51:26.042 247447 DEBUG nova.service [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 26 07:51:26 np0005536586 nova_compute[247443]: 2025-11-26 12:51:26.042 247447 DEBUG nova.servicegroup.drivers.db [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 26 07:51:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.236046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491236082, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3484318, "memory_usage": 3542968, "flush_reason": "Manual Compaction"}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491246314, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3398356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9678, "largest_seqno": 11721, "table_properties": {"data_size": 3389143, "index_size": 5835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17916, "raw_average_key_size": 19, "raw_value_size": 3370788, "raw_average_value_size": 3663, "num_data_blocks": 265, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161263, "oldest_key_time": 1764161263, "file_creation_time": 1764161491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10304 microseconds, and 8603 cpu microseconds.
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246351) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3398356 bytes OK
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246370) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246792) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246804) EVENT_LOG_v1 {"time_micros": 1764161491246800, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3475783, prev total WAL file size 3475783, number of live WAL files 2.
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.253087) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3318KB)], [26(6078KB)]
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491253127, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9623075, "oldest_snapshot_seqno": -1}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3687 keys, 8023330 bytes, temperature: kUnknown
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491271152, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8023330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7994775, "index_size": 18205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88547, "raw_average_key_size": 24, "raw_value_size": 7924360, "raw_average_value_size": 2149, "num_data_blocks": 791, "num_entries": 3687, "num_filter_entries": 3687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271385) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8023330 bytes
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 531.8 rd, 443.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4201, records dropped: 514 output_compression: NoCompression
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271860) EVENT_LOG_v1 {"time_micros": 1764161491271852, "job": 10, "event": "compaction_finished", "compaction_time_micros": 18094, "compaction_time_cpu_micros": 14920, "output_level": 6, "num_output_files": 1, "total_output_size": 8023330, "num_input_records": 4201, "num_output_records": 3687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491272326, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491273014, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.253013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:31 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:51:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:51:35
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:51:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:51:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:39 np0005536586 podman[247794]: 2025-11-26 12:51:39.875336094 +0000 UTC m=+0.041402353 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 07:51:39 np0005536586 podman[247795]: 2025-11-26 12:51:39.879551951 +0000 UTC m=+0.045673184 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 07:51:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:51:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:51:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:51:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:51:45 np0005536586 podman[247826]: 2025-11-26 12:51:45.885250679 +0000 UTC m=+0.053276596 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 07:51:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:49 np0005536586 nova_compute[247443]: 2025-11-26 12:51:49.045 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:51:49 np0005536586 nova_compute[247443]: 2025-11-26 12:51:49.061 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:51:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:51:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:51:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:52:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:52:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:52:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 07:52:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603071237' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14335 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:10 np0005536586 podman[247849]: 2025-11-26 12:52:10.885318484 +0000 UTC m=+0.043832652 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:52:10 np0005536586 podman[247850]: 2025-11-26 12:52:10.905794035 +0000 UTC m=+0.064462094 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 26 07:52:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:15 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 3cdb41f1-c070-4983-9f29-bbe21b71db68 does not exist
Nov 26 07:52:15 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1dff752f-0580-40c3-892a-2de40f805ecd does not exist
Nov 26 07:52:15 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1270e570-7e15-493b-963c-fa97794a2a6a does not exist
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:52:15 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.020708559 +0000 UTC m=+0.027436318 container create 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:52:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:16 np0005536586 systemd[1]: Started libpod-conmon-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope.
Nov 26 07:52:16 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.07147253 +0000 UTC m=+0.078200310 container init 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.076807407 +0000 UTC m=+0.083535167 container start 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.078132518 +0000 UTC m=+0.084860278 container attach 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:52:16 np0005536586 systemd[1]: libpod-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope: Deactivated successfully.
Nov 26 07:52:16 np0005536586 elated_visvesvaraya[248159]: 167 167
Nov 26 07:52:16 np0005536586 conmon[248159]: conmon 5fec31a6ed88ffc68bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope/container/memory.events
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.081574745 +0000 UTC m=+0.088302504 container died 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:52:16 np0005536586 systemd[1]: var-lib-containers-storage-overlay-869cb5fe09c88caeef62029997f7551edb1fcb2ebd2f8b46011c5416d1d705be-merged.mount: Deactivated successfully.
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.103201979 +0000 UTC m=+0.109929739 container remove 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:52:16 np0005536586 podman[248144]: 2025-11-26 12:52:16.008890055 +0000 UTC m=+0.015617825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:16 np0005536586 systemd[1]: libpod-conmon-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope: Deactivated successfully.
Nov 26 07:52:16 np0005536586 podman[248155]: 2025-11-26 12:52:16.11940094 +0000 UTC m=+0.074730601 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:52:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:16 np0005536586 podman[248203]: 2025-11-26 12:52:16.227555386 +0000 UTC m=+0.028470801 container create 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:52:16 np0005536586 systemd[1]: Started libpod-conmon-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope.
Nov 26 07:52:16 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:16 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:16 np0005536586 podman[248203]: 2025-11-26 12:52:16.290124968 +0000 UTC m=+0.091040382 container init 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:52:16 np0005536586 podman[248203]: 2025-11-26 12:52:16.294780613 +0000 UTC m=+0.095696017 container start 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:52:16 np0005536586 podman[248203]: 2025-11-26 12:52:16.296105093 +0000 UTC m=+0.097020507 container attach 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:52:16 np0005536586 podman[248203]: 2025-11-26 12:52:16.216557331 +0000 UTC m=+0.017472756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:52:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:16 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:52:17 np0005536586 optimistic_neumann[248216]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:52:17 np0005536586 optimistic_neumann[248216]: --> relative data size: 1.0
Nov 26 07:52:17 np0005536586 optimistic_neumann[248216]: --> All data devices are unavailable
Nov 26 07:52:17 np0005536586 systemd[1]: libpod-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope: Deactivated successfully.
Nov 26 07:52:17 np0005536586 podman[248203]: 2025-11-26 12:52:17.134061142 +0000 UTC m=+0.934976566 container died 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:52:17 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67-merged.mount: Deactivated successfully.
Nov 26 07:52:17 np0005536586 podman[248203]: 2025-11-26 12:52:17.170216895 +0000 UTC m=+0.971132308 container remove 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:52:17 np0005536586 systemd[1]: libpod-conmon-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope: Deactivated successfully.
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.605897935 +0000 UTC m=+0.028852371 container create 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:52:17 np0005536586 systemd[1]: Started libpod-conmon-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope.
Nov 26 07:52:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.645954449 +0000 UTC m=+0.068908885 container init 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.651555339 +0000 UTC m=+0.074509775 container start 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.652858909 +0000 UTC m=+0.075813365 container attach 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 07:52:17 np0005536586 epic_williamson[248400]: 167 167
Nov 26 07:52:17 np0005536586 systemd[1]: libpod-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope: Deactivated successfully.
Nov 26 07:52:17 np0005536586 conmon[248400]: conmon 1811cfeae0cc8bcdae11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope/container/memory.events
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.655576728 +0000 UTC m=+0.078531165 container died 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:52:17 np0005536586 systemd[1]: var-lib-containers-storage-overlay-387392830b0ec1a4fdf774168af941e07230d1f845a0b0231e4735a232a8ad86-merged.mount: Deactivated successfully.
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.680027001 +0000 UTC m=+0.102981437 container remove 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:52:17 np0005536586 podman[248387]: 2025-11-26 12:52:17.594683782 +0000 UTC m=+0.017638238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:17 np0005536586 systemd[1]: libpod-conmon-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope: Deactivated successfully.
Nov 26 07:52:17 np0005536586 podman[248422]: 2025-11-26 12:52:17.801408951 +0000 UTC m=+0.028635592 container create 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:52:17 np0005536586 systemd[1]: Started libpod-conmon-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope.
Nov 26 07:52:17 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:17 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:17 np0005536586 podman[248422]: 2025-11-26 12:52:17.85407431 +0000 UTC m=+0.081300942 container init 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:52:17 np0005536586 podman[248422]: 2025-11-26 12:52:17.858478882 +0000 UTC m=+0.085705513 container start 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:52:17 np0005536586 podman[248422]: 2025-11-26 12:52:17.859731306 +0000 UTC m=+0.086957937 container attach 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 07:52:17 np0005536586 podman[248422]: 2025-11-26 12:52:17.790469496 +0000 UTC m=+0.017696148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:18 np0005536586 keen_euler[248436]: {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    "0": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "devices": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "/dev/loop3"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            ],
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_name": "ceph_lv0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_size": "21470642176",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "name": "ceph_lv0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "tags": {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_name": "ceph",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.crush_device_class": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.encrypted": "0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_id": "0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.vdo": "0"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            },
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "vg_name": "ceph_vg0"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        }
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    ],
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    "1": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "devices": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "/dev/loop4"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            ],
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_name": "ceph_lv1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_size": "21470642176",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "name": "ceph_lv1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "tags": {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_name": "ceph",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.crush_device_class": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.encrypted": "0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_id": "1",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.vdo": "0"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            },
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "vg_name": "ceph_vg1"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        }
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    ],
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    "2": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "devices": [
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "/dev/loop5"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            ],
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_name": "ceph_lv2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_size": "21470642176",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "name": "ceph_lv2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "tags": {
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.cluster_name": "ceph",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.crush_device_class": "",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.encrypted": "0",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osd_id": "2",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:                "ceph.vdo": "0"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            },
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "type": "block",
Nov 26 07:52:18 np0005536586 keen_euler[248436]:            "vg_name": "ceph_vg2"
Nov 26 07:52:18 np0005536586 keen_euler[248436]:        }
Nov 26 07:52:18 np0005536586 keen_euler[248436]:    ]
Nov 26 07:52:18 np0005536586 keen_euler[248436]: }
Nov 26 07:52:18 np0005536586 systemd[1]: libpod-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope: Deactivated successfully.
Nov 26 07:52:18 np0005536586 conmon[248436]: conmon 4402ea175c91021f35d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope/container/memory.events
Nov 26 07:52:18 np0005536586 podman[248422]: 2025-11-26 12:52:18.495449954 +0000 UTC m=+0.722676595 container died 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:52:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7-merged.mount: Deactivated successfully.
Nov 26 07:52:18 np0005536586 podman[248422]: 2025-11-26 12:52:18.524048415 +0000 UTC m=+0.751275047 container remove 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:52:18 np0005536586 systemd[1]: libpod-conmon-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope: Deactivated successfully.
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.921249778 +0000 UTC m=+0.025003998 container create 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 07:52:18 np0005536586 systemd[1]: Started libpod-conmon-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope.
Nov 26 07:52:18 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.956587437 +0000 UTC m=+0.060341657 container init 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.960885207 +0000 UTC m=+0.064639417 container start 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.962080282 +0000 UTC m=+0.065834492 container attach 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:52:18 np0005536586 frosty_cori[248598]: 167 167
Nov 26 07:52:18 np0005536586 systemd[1]: libpod-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope: Deactivated successfully.
Nov 26 07:52:18 np0005536586 conmon[248598]: conmon 18128b9db9be79a1c783 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope/container/memory.events
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.964595059 +0000 UTC m=+0.068349268 container died 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 07:52:18 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a12818cf15520940e65982f7de9d9825c0939831b50a2838c5d9c604adcad5c3-merged.mount: Deactivated successfully.
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.981959929 +0000 UTC m=+0.085714139 container remove 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:52:18 np0005536586 podman[248585]: 2025-11-26 12:52:18.91144791 +0000 UTC m=+0.015202141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:18 np0005536586 systemd[1]: libpod-conmon-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope: Deactivated successfully.
Nov 26 07:52:19 np0005536586 podman[248620]: 2025-11-26 12:52:19.100320187 +0000 UTC m=+0.026213631 container create 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:52:19 np0005536586 systemd[1]: Started libpod-conmon-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope.
Nov 26 07:52:19 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:52:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:19 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:52:19 np0005536586 podman[248620]: 2025-11-26 12:52:19.158070382 +0000 UTC m=+0.083963826 container init 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 07:52:19 np0005536586 podman[248620]: 2025-11-26 12:52:19.163058546 +0000 UTC m=+0.088951990 container start 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:52:19 np0005536586 podman[248620]: 2025-11-26 12:52:19.164454661 +0000 UTC m=+0.090348105 container attach 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:52:19 np0005536586 podman[248620]: 2025-11-26 12:52:19.090149814 +0000 UTC m=+0.016043268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:52:19 np0005536586 practical_germain[248633]: {
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_id": 1,
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "type": "bluestore"
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    },
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_id": 2,
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "type": "bluestore"
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    },
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_id": 0,
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:52:19 np0005536586 practical_germain[248633]:        "type": "bluestore"
Nov 26 07:52:19 np0005536586 practical_germain[248633]:    }
Nov 26 07:52:19 np0005536586 practical_germain[248633]: }
Nov 26 07:52:19 np0005536586 systemd[1]: libpod-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope: Deactivated successfully.
Nov 26 07:52:19 np0005536586 podman[248666]: 2025-11-26 12:52:19.944153216 +0000 UTC m=+0.016109243 container died 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 07:52:19 np0005536586 systemd[1]: var-lib-containers-storage-overlay-1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec-merged.mount: Deactivated successfully.
Nov 26 07:52:19 np0005536586 podman[248666]: 2025-11-26 12:52:19.972556088 +0000 UTC m=+0.044512114 container remove 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:52:19 np0005536586 systemd[1]: libpod-conmon-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope: Deactivated successfully.
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4bbb12ed-7710-424f-b06e-83afed18d215 does not exist
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 58402b9a-4304-4c60-b89c-d201e4201e7c does not exist
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 07:52:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1403041407' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 07:52:20 np0005536586 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 07:52:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:52:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.830 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.830 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.832 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.832 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:52:22 np0005536586 nova_compute[247443]: 2025-11-26 12:52:22.846 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:52:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:52:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708208664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.172 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.356 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5222MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": 
"0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.413 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.414 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.425 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:52:23 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:52:23 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965460234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.758 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.762 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.774 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.775 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 07:52:23 np0005536586 nova_compute[247443]: 2025-11-26 12:52:23.775 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:52:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:52:35
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:52:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:52:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:41 np0005536586 podman[248772]: 2025-11-26 12:52:41.878304868 +0000 UTC m=+0.044574390 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 26 07:52:41 np0005536586 podman[248773]: 2025-11-26 12:52:41.878603643 +0000 UTC m=+0.045372006 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 07:52:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:52:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:52:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:46 np0005536586 podman[248805]: 2025-11-26 12:52:46.89159708 +0000 UTC m=+0.059423493 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 26 07:52:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:52:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:52:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:53:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:53:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.728 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:53:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:12 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.661 159053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:77:ce', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3b:aa:b7:c5:2f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 07:53:12 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.662 159053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 07:53:12 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.662 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 07:53:12 np0005536586 podman[248829]: 2025-11-26 12:53:12.883358595 +0000 UTC m=+0.043213645 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 07:53:12 np0005536586 podman[248830]: 2025-11-26 12:53:12.894380065 +0000 UTC m=+0.054439348 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 26 07:53:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:17 np0005536586 podman[248862]: 2025-11-26 12:53:17.915158124 +0000 UTC m=+0.080348148 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 07:53:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:53:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:53:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:53:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:53:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:20 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 5b9e185c-a29b-4c17-8c59-ac70376c536d does not exist
Nov 26 07:53:20 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 7b38ebbb-1d79-43e8-9ad3-a2243f19b4c8 does not exist
Nov 26 07:53:20 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0b845ad8-9658-41f1-88b7-1fdfd296be15 does not exist
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:53:20 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.00823178 +0000 UTC m=+0.026678200 container create 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:53:21 np0005536586 systemd[1]: Started libpod-conmon-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope.
Nov 26 07:53:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:21 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.075802432 +0000 UTC m=+0.094248853 container init 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.081445806 +0000 UTC m=+0.099892236 container start 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.083028148 +0000 UTC m=+0.101474568 container attach 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:53:21 np0005536586 hopeful_feistel[249157]: 167 167
Nov 26 07:53:21 np0005536586 systemd[1]: libpod-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope: Deactivated successfully.
Nov 26 07:53:21 np0005536586 conmon[249157]: conmon 2efd60a3899f0f453a15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope/container/memory.events
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.087377333 +0000 UTC m=+0.105823752 container died 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:20.997625706 +0000 UTC m=+0.016072136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:21 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d24d9f3fc863308279a777bca23c2126d6e792f848a46201a82509ff065dd9cf-merged.mount: Deactivated successfully.
Nov 26 07:53:21 np0005536586 podman[249144]: 2025-11-26 12:53:21.104556944 +0000 UTC m=+0.123003363 container remove 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:53:21 np0005536586 systemd[1]: libpod-conmon-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope: Deactivated successfully.
Nov 26 07:53:21 np0005536586 podman[249179]: 2025-11-26 12:53:21.223058965 +0000 UTC m=+0.027743518 container create 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:53:21 np0005536586 systemd[1]: Started libpod-conmon-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope.
Nov 26 07:53:21 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:21 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:21 np0005536586 podman[249179]: 2025-11-26 12:53:21.275262291 +0000 UTC m=+0.079946844 container init 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:53:21 np0005536586 podman[249179]: 2025-11-26 12:53:21.282737475 +0000 UTC m=+0.087422028 container start 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 07:53:21 np0005536586 podman[249179]: 2025-11-26 12:53:21.285336402 +0000 UTC m=+0.090020956 container attach 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 07:53:21 np0005536586 podman[249179]: 2025-11-26 12:53:21.211551442 +0000 UTC m=+0.016236014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:53:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:21 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:53:22 np0005536586 funny_brahmagupta[249192]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:53:22 np0005536586 funny_brahmagupta[249192]: --> relative data size: 1.0
Nov 26 07:53:22 np0005536586 funny_brahmagupta[249192]: --> All data devices are unavailable
Nov 26 07:53:22 np0005536586 systemd[1]: libpod-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope: Deactivated successfully.
Nov 26 07:53:22 np0005536586 podman[249179]: 2025-11-26 12:53:22.116608873 +0000 UTC m=+0.921293426 container died 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:53:22 np0005536586 systemd[1]: var-lib-containers-storage-overlay-d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6-merged.mount: Deactivated successfully.
Nov 26 07:53:22 np0005536586 podman[249179]: 2025-11-26 12:53:22.150970349 +0000 UTC m=+0.955654902 container remove 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 07:53:22 np0005536586 systemd[1]: libpod-conmon-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope: Deactivated successfully.
Nov 26 07:53:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.580117783 +0000 UTC m=+0.027265396 container create d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 07:53:22 np0005536586 systemd[1]: Started libpod-conmon-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope.
Nov 26 07:53:22 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.636271132 +0000 UTC m=+0.083418755 container init d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.641542154 +0000 UTC m=+0.088689767 container start d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.642672934 +0000 UTC m=+0.089820547 container attach d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:53:22 np0005536586 kind_almeida[249375]: 167 167
Nov 26 07:53:22 np0005536586 systemd[1]: libpod-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope: Deactivated successfully.
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.645340951 +0000 UTC m=+0.092488564 container died d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 07:53:22 np0005536586 systemd[1]: var-lib-containers-storage-overlay-5a12844dcbb8aab58932258ef896a97d78f147f8af6d09fd1927cffaa18c407c-merged.mount: Deactivated successfully.
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.666115397 +0000 UTC m=+0.113263010 container remove d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 07:53:22 np0005536586 podman[249361]: 2025-11-26 12:53:22.569235017 +0000 UTC m=+0.016382650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:22 np0005536586 systemd[1]: libpod-conmon-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope: Deactivated successfully.
Nov 26 07:53:22 np0005536586 podman[249397]: 2025-11-26 12:53:22.790947174 +0000 UTC m=+0.033723866 container create 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:53:22 np0005536586 systemd[1]: Started libpod-conmon-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope.
Nov 26 07:53:22 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:22 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:22 np0005536586 podman[249397]: 2025-11-26 12:53:22.862278903 +0000 UTC m=+0.105055615 container init 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 07:53:22 np0005536586 podman[249397]: 2025-11-26 12:53:22.870840766 +0000 UTC m=+0.113617459 container start 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:53:22 np0005536586 podman[249397]: 2025-11-26 12:53:22.872169199 +0000 UTC m=+0.114945911 container attach 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 07:53:22 np0005536586 podman[249397]: 2025-11-26 12:53:22.776266141 +0000 UTC m=+0.019042853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]: {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    "0": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "devices": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "/dev/loop3"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            ],
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_name": "ceph_lv0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_size": "21470642176",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "name": "ceph_lv0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "tags": {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_name": "ceph",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.crush_device_class": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.encrypted": "0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_id": "0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.vdo": "0"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            },
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "vg_name": "ceph_vg0"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        }
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    ],
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    "1": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "devices": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "/dev/loop4"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            ],
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_name": "ceph_lv1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_size": "21470642176",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "name": "ceph_lv1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "tags": {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_name": "ceph",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.crush_device_class": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.encrypted": "0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_id": "1",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.vdo": "0"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            },
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "vg_name": "ceph_vg1"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        }
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    ],
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    "2": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "devices": [
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "/dev/loop5"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            ],
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_name": "ceph_lv2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_size": "21470642176",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "name": "ceph_lv2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "tags": {
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.cluster_name": "ceph",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.crush_device_class": "",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.encrypted": "0",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osd_id": "2",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:                "ceph.vdo": "0"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            },
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "type": "block",
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:            "vg_name": "ceph_vg2"
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:        }
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]:    ]
Nov 26 07:53:23 np0005536586 priceless_bouman[249410]: }
Nov 26 07:53:23 np0005536586 systemd[1]: libpod-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope: Deactivated successfully.
Nov 26 07:53:23 np0005536586 podman[249419]: 2025-11-26 12:53:23.586992175 +0000 UTC m=+0.024844434 container died 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:53:23 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b-merged.mount: Deactivated successfully.
Nov 26 07:53:23 np0005536586 podman[249419]: 2025-11-26 12:53:23.628225401 +0000 UTC m=+0.066077648 container remove 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 07:53:23 np0005536586 systemd[1]: libpod-conmon-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope: Deactivated successfully.
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.768 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.825 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.825 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.841 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:53:23 np0005536586 nova_compute[247443]: 2025-11-26 12:53:23.841 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:53:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:53:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284262952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:53:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.188460751 +0000 UTC m=+0.033455120 container create 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.195 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:53:24 np0005536586 systemd[1]: Started libpod-conmon-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope.
Nov 26 07:53:24 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.253374536 +0000 UTC m=+0.098368925 container init 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.259247994 +0000 UTC m=+0.104242364 container start 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.26056719 +0000 UTC m=+0.105561589 container attach 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:53:24 np0005536586 wonderful_almeida[249599]: 167 167
Nov 26 07:53:24 np0005536586 systemd[1]: libpod-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope: Deactivated successfully.
Nov 26 07:53:24 np0005536586 conmon[249599]: conmon 54954175bd2527ece05d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope/container/memory.events
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.265605894 +0000 UTC m=+0.110600273 container died 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.174445763 +0000 UTC m=+0.019440152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:24 np0005536586 systemd[1]: var-lib-containers-storage-overlay-8054a82bdd911106064ed1a4bd9416caffd1ea6ebe09560674e8bc8b3dc729ed-merged.mount: Deactivated successfully.
Nov 26 07:53:24 np0005536586 podman[249584]: 2025-11-26 12:53:24.295667037 +0000 UTC m=+0.140661406 container remove 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:53:24 np0005536586 systemd[1]: libpod-conmon-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope: Deactivated successfully.
Nov 26 07:53:24 np0005536586 podman[249621]: 2025-11-26 12:53:24.442446531 +0000 UTC m=+0.034398477 container create fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 26 07:53:24 np0005536586 systemd[1]: Started libpod-conmon-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope.
Nov 26 07:53:24 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:53:24 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:24 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:24 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:24 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:53:24 np0005536586 podman[249621]: 2025-11-26 12:53:24.515493242 +0000 UTC m=+0.107445198 container init fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 07:53:24 np0005536586 podman[249621]: 2025-11-26 12:53:24.520709331 +0000 UTC m=+0.112661276 container start fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 07:53:24 np0005536586 podman[249621]: 2025-11-26 12:53:24.523488577 +0000 UTC m=+0.115440523 container attach fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:53:24 np0005536586 podman[249621]: 2025-11-26 12:53:24.426934953 +0000 UTC m=+0.018886909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.542 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.543 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": 
"0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.544 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.544 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.585 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.585 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.597 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:53:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:53:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166629011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.935 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.939 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.951 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.953 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 07:53:24 np0005536586 nova_compute[247443]: 2025-11-26 12:53:24.953 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]: {
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_id": 1,
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "type": "bluestore"
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    },
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_id": 2,
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "type": "bluestore"
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    },
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_id": 0,
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:        "type": "bluestore"
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]:    }
Nov 26 07:53:25 np0005536586 amazing_dewdney[249635]: }
Nov 26 07:53:25 np0005536586 systemd[1]: libpod-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope: Deactivated successfully.
Nov 26 07:53:25 np0005536586 podman[249690]: 2025-11-26 12:53:25.387023639 +0000 UTC m=+0.024104360 container died fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 07:53:25 np0005536586 systemd[1]: var-lib-containers-storage-overlay-3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045-merged.mount: Deactivated successfully.
Nov 26 07:53:25 np0005536586 podman[249690]: 2025-11-26 12:53:25.417066839 +0000 UTC m=+0.054147539 container remove fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 07:53:25 np0005536586 systemd[1]: libpod-conmon-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope: Deactivated successfully.
Nov 26 07:53:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:53:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:25 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:53:25 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:25 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev a0104314-0bfb-4ed2-95b5-bf6c8626b560 does not exist
Nov 26 07:53:25 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 2aeac5c0-b062-4ad2-a52a-9d4774826cef does not exist
Nov 26 07:53:25 np0005536586 nova_compute[247443]: 2025-11-26 12:53:25.946 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:25 np0005536586 nova_compute[247443]: 2025-11-26 12:53:25.947 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:25 np0005536586 nova_compute[247443]: 2025-11-26 12:53:25.947 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:53:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:53:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:53:35
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:53:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:53:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:43 np0005536586 podman[249752]: 2025-11-26 12:53:43.892453364 +0000 UTC m=+0.052138275 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 07:53:43 np0005536586 podman[249753]: 2025-11-26 12:53:43.922262934 +0000 UTC m=+0.079200477 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 07:53:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:53:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:53:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:48 np0005536586 podman[249785]: 2025-11-26 12:53:48.913294332 +0000 UTC m=+0.071285933 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 07:53:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:53:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:53:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.728 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:54:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:54:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:54:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:14 np0005536586 podman[249809]: 2025-11-26 12:54:14.886126001 +0000 UTC m=+0.039735273 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 07:54:14 np0005536586 podman[249808]: 2025-11-26 12:54:14.911379746 +0000 UTC m=+0.065280908 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 07:54:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:54:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:54:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:54:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:54:19 np0005536586 podman[249841]: 2025-11-26 12:54:19.908559134 +0000 UTC m=+0.070168044 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 26 07:54:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:54:23 np0005536586 nova_compute[247443]: 2025-11-26 12:54:23.855 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:54:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:54:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601247070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.206 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:54:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.435 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5235MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": 
"0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.484 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.484 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.501 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:54:24 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:54:24 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4095758321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.838 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.843 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.854 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.855 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 07:54:24 np0005536586 nova_compute[247443]: 2025-11-26 12:54:24.856 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 07:54:25 np0005536586 nova_compute[247443]: 2025-11-26 12:54:25.838 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:54:25 np0005536586 nova_compute[247443]: 2025-11-26 12:54:25.838 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:54:25 np0005536586 nova_compute[247443]: 2025-11-26 12:54:25.839 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:54:25 np0005536586 nova_compute[247443]: 2025-11-26 12:54:25.839 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4774f82d-7b5f-4620-bc80-48c703a51331 does not exist
Nov 26 07:54:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 2ef7cf1e-5ec8-4c45-9819-382cd4647541 does not exist
Nov 26 07:54:26 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 222bc08e-1562-4a83-9f37-bed2b555a2a5 does not exist
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:26 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.604349952 +0000 UTC m=+0.030391072 container create 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:54:26 np0005536586 systemd[1]: Started libpod-conmon-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope.
Nov 26 07:54:26 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.669084483 +0000 UTC m=+0.095125604 container init 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.67478152 +0000 UTC m=+0.100822641 container start 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.677328681 +0000 UTC m=+0.103369811 container attach 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:54:26 np0005536586 infallible_poincare[250183]: 167 167
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.680457389 +0000 UTC m=+0.106498508 container died 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 07:54:26 np0005536586 systemd[1]: libpod-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope: Deactivated successfully.
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.592038125 +0000 UTC m=+0.018079266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:26 np0005536586 systemd[1]: var-lib-containers-storage-overlay-47f574a4a3d630e6fe05b7c54ee50659c1932263d046fed0022051d90f8842df-merged.mount: Deactivated successfully.
Nov 26 07:54:26 np0005536586 podman[250169]: 2025-11-26 12:54:26.706133243 +0000 UTC m=+0.132174363 container remove 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 07:54:26 np0005536586 systemd[1]: libpod-conmon-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope: Deactivated successfully.
Nov 26 07:54:26 np0005536586 nova_compute[247443]: 2025-11-26 12:54:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:54:26 np0005536586 nova_compute[247443]: 2025-11-26 12:54:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:54:26 np0005536586 podman[250205]: 2025-11-26 12:54:26.844249825 +0000 UTC m=+0.034922132 container create 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:54:26 np0005536586 systemd[1]: Started libpod-conmon-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope.
Nov 26 07:54:26 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:26 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:26 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:26 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:26 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:26 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:26 np0005536586 podman[250205]: 2025-11-26 12:54:26.908461601 +0000 UTC m=+0.099133908 container init 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 07:54:26 np0005536586 podman[250205]: 2025-11-26 12:54:26.914702905 +0000 UTC m=+0.105375192 container start 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 07:54:26 np0005536586 podman[250205]: 2025-11-26 12:54:26.915894642 +0000 UTC m=+0.106566939 container attach 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:54:26 np0005536586 podman[250205]: 2025-11-26 12:54:26.831967516 +0000 UTC m=+0.022639833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:27 np0005536586 wizardly_curran[250218]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:54:27 np0005536586 wizardly_curran[250218]: --> relative data size: 1.0
Nov 26 07:54:27 np0005536586 wizardly_curran[250218]: --> All data devices are unavailable
Nov 26 07:54:27 np0005536586 systemd[1]: libpod-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope: Deactivated successfully.
Nov 26 07:54:27 np0005536586 podman[250247]: 2025-11-26 12:54:27.809435719 +0000 UTC m=+0.020797740 container died 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:54:27 np0005536586 systemd[1]: var-lib-containers-storage-overlay-987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c-merged.mount: Deactivated successfully.
Nov 26 07:54:27 np0005536586 podman[250247]: 2025-11-26 12:54:27.843773289 +0000 UTC m=+0.055135310 container remove 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:54:27 np0005536586 systemd[1]: libpod-conmon-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope: Deactivated successfully.
Nov 26 07:54:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.341321738 +0000 UTC m=+0.033078006 container create eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:54:28 np0005536586 systemd[1]: Started libpod-conmon-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope.
Nov 26 07:54:28 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.402459531 +0000 UTC m=+0.094215788 container init eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.408148875 +0000 UTC m=+0.099905133 container start eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.409166393 +0000 UTC m=+0.100922651 container attach eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 07:54:28 np0005536586 thirsty_haslett[250402]: 167 167
Nov 26 07:54:28 np0005536586 systemd[1]: libpod-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope: Deactivated successfully.
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.412269412 +0000 UTC m=+0.104025669 container died eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.327330138 +0000 UTC m=+0.019086415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:28 np0005536586 systemd[1]: var-lib-containers-storage-overlay-0e485d8e7dc2f92dbde2cf3926848945f6e48a719588cbde45f7b43b6ed22f1b-merged.mount: Deactivated successfully.
Nov 26 07:54:28 np0005536586 podman[250389]: 2025-11-26 12:54:28.43312972 +0000 UTC m=+0.124885977 container remove eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 07:54:28 np0005536586 systemd[1]: libpod-conmon-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope: Deactivated successfully.
Nov 26 07:54:28 np0005536586 podman[250425]: 2025-11-26 12:54:28.571163686 +0000 UTC m=+0.032061882 container create 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 07:54:28 np0005536586 systemd[1]: Started libpod-conmon-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope.
Nov 26 07:54:28 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:28 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:28 np0005536586 podman[250425]: 2025-11-26 12:54:28.626106442 +0000 UTC m=+0.087004648 container init 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:54:28 np0005536586 podman[250425]: 2025-11-26 12:54:28.631497322 +0000 UTC m=+0.092395519 container start 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:54:28 np0005536586 podman[250425]: 2025-11-26 12:54:28.633027046 +0000 UTC m=+0.093925262 container attach 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:54:28 np0005536586 podman[250425]: 2025-11-26 12:54:28.556853304 +0000 UTC m=+0.017751510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]: {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    "0": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "devices": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "/dev/loop3"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            ],
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_name": "ceph_lv0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_size": "21470642176",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "name": "ceph_lv0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "tags": {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_name": "ceph",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.crush_device_class": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.encrypted": "0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_id": "0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.vdo": "0"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            },
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "vg_name": "ceph_vg0"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        }
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    ],
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    "1": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "devices": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "/dev/loop4"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            ],
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_name": "ceph_lv1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_size": "21470642176",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "name": "ceph_lv1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "tags": {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_name": "ceph",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.crush_device_class": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.encrypted": "0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_id": "1",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.vdo": "0"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            },
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "vg_name": "ceph_vg1"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        }
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    ],
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    "2": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "devices": [
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "/dev/loop5"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            ],
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_name": "ceph_lv2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_size": "21470642176",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "name": "ceph_lv2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "tags": {
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.cluster_name": "ceph",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.crush_device_class": "",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.encrypted": "0",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osd_id": "2",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:                "ceph.vdo": "0"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            },
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "type": "block",
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:            "vg_name": "ceph_vg2"
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:        }
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]:    ]
Nov 26 07:54:29 np0005536586 beautiful_panini[250438]: }
Nov 26 07:54:29 np0005536586 systemd[1]: libpod-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope: Deactivated successfully.
Nov 26 07:54:29 np0005536586 podman[250425]: 2025-11-26 12:54:29.324028236 +0000 UTC m=+0.784926431 container died 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:54:29 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88-merged.mount: Deactivated successfully.
Nov 26 07:54:29 np0005536586 podman[250425]: 2025-11-26 12:54:29.361310416 +0000 UTC m=+0.822208612 container remove 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:54:29 np0005536586 systemd[1]: libpod-conmon-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope: Deactivated successfully.
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.810700277 +0000 UTC m=+0.028267259 container create 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:54:29 np0005536586 systemd[1]: Started libpod-conmon-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope.
Nov 26 07:54:29 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.867820286 +0000 UTC m=+0.085387290 container init 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.872393306 +0000 UTC m=+0.089960289 container start 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.873696832 +0000 UTC m=+0.091263815 container attach 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:54:29 np0005536586 systemd[1]: libpod-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope: Deactivated successfully.
Nov 26 07:54:29 np0005536586 hungry_galileo[250600]: 167 167
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.876403123 +0000 UTC m=+0.093970095 container died 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:54:29 np0005536586 systemd[1]: var-lib-containers-storage-overlay-96805486b05ad36c0554a42f1b69d9d06d76c11a04886e3efeb3e8fe56a5a832-merged.mount: Deactivated successfully.
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.894194275 +0000 UTC m=+0.111761259 container remove 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:54:29 np0005536586 podman[250587]: 2025-11-26 12:54:29.799791215 +0000 UTC m=+0.017358218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:29 np0005536586 systemd[1]: libpod-conmon-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope: Deactivated successfully.
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.020902957 +0000 UTC m=+0.029221007 container create eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:54:30 np0005536586 systemd[1]: Started libpod-conmon-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope.
Nov 26 07:54:30 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:54:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:30 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.084986492 +0000 UTC m=+0.093304552 container init eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.090104207 +0000 UTC m=+0.098422257 container start eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.091496772 +0000 UTC m=+0.099814823 container attach eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.008776511 +0000 UTC m=+0.017094581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:54:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]: {
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_id": 1,
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "type": "bluestore"
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    },
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_id": 2,
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "type": "bluestore"
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    },
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_id": 0,
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:        "type": "bluestore"
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]:    }
Nov 26 07:54:30 np0005536586 inspiring_kalam[250635]: }
Nov 26 07:54:30 np0005536586 systemd[1]: libpod-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope: Deactivated successfully.
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.869295829 +0000 UTC m=+0.877613880 container died eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 07:54:30 np0005536586 systemd[1]: var-lib-containers-storage-overlay-ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b-merged.mount: Deactivated successfully.
Nov 26 07:54:30 np0005536586 podman[250622]: 2025-11-26 12:54:30.903690406 +0000 UTC m=+0.912008456 container remove eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:54:30 np0005536586 systemd[1]: libpod-conmon-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope: Deactivated successfully.
Nov 26 07:54:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:54:30 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:30 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:54:30 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:30 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev fc069475-44ba-435d-8205-568f4723662f does not exist
Nov 26 07:54:30 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0f0d6cfb-d1be-4dc4-a285-3848263945ca does not exist
Nov 26 07:54:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:31 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:31 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:54:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:54:35
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:54:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:54:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:54:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:54:45 np0005536586 podman[250728]: 2025-11-26 12:54:45.88823437 +0000 UTC m=+0.048096584 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:54:45 np0005536586 podman[250729]: 2025-11-26 12:54:45.889276284 +0000 UTC m=+0.049568669 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.063314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686063357, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 250, "total_data_size": 2897002, "memory_usage": 2936008, "flush_reason": "Manual Compaction"}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686068439, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1649321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11722, "largest_seqno": 13493, "table_properties": {"data_size": 1643468, "index_size": 2928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14922, "raw_average_key_size": 20, "raw_value_size": 1630474, "raw_average_value_size": 2212, "num_data_blocks": 135, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161492, "oldest_key_time": 1764161492, "file_creation_time": 1764161686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5148 microseconds, and 3947 cpu microseconds.
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068467) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1649321 bytes OK
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068483) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068854) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068867) EVENT_LOG_v1 {"time_micros": 1764161686068863, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2889436, prev total WAL file size 2889436, number of live WAL files 2.
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.069707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1610KB)], [29(7835KB)]
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686069734, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9672651, "oldest_snapshot_seqno": -1}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4003 keys, 7568024 bytes, temperature: kUnknown
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686084974, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7568024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7539429, "index_size": 17477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95305, "raw_average_key_size": 23, "raw_value_size": 7465432, "raw_average_value_size": 1864, "num_data_blocks": 763, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085095) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7568024 bytes
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 633.4 rd, 495.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.5) write-amplify(4.6) OK, records in: 4424, records dropped: 421 output_compression: NoCompression
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085459) EVENT_LOG_v1 {"time_micros": 1764161686085454, "job": 12, "event": "compaction_finished", "compaction_time_micros": 15271, "compaction_time_cpu_micros": 12728, "output_level": 6, "num_output_files": 1, "total_output_size": 7568024, "num_input_records": 4424, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686085681, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686086599, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.069658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:54:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:50 np0005536586 podman[250761]: 2025-11-26 12:54:50.922413329 +0000 UTC m=+0.089872223 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:54:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:54:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:54:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:55:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:55:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:55:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:16 np0005536586 podman[250785]: 2025-11-26 12:55:16.885257496 +0000 UTC m=+0.044056570 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 26 07:55:16 np0005536586 podman[250786]: 2025-11-26 12:55:16.894225819 +0000 UTC m=+0.049885084 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 07:55:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:55:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:55:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:55:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:55:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:21 np0005536586 podman[250818]: 2025-11-26 12:55:21.902372084 +0000 UTC m=+0.066182622 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 26 07:55:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.815 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.827 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.827 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:55:23 np0005536586 nova_compute[247443]: 2025-11-26 12:55:23.834 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:24 np0005536586 nova_compute[247443]: 2025-11-26 12:55:24.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:24 np0005536586 nova_compute[247443]: 2025-11-26 12:55:24.819 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.815 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.837 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:55:25 np0005536586 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:55:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:55:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71165918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.166 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:55:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.372 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5222MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.416 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.416 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.428 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:55:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:55:26 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/134117797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.759 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.763 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.774 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.775 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 07:55:26 np0005536586 nova_compute[247443]: 2025-11-26 12:55:26.775 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:55:27 np0005536586 nova_compute[247443]: 2025-11-26 12:55:27.776 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:27 np0005536586 nova_compute[247443]: 2025-11-26 12:55:27.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:28 np0005536586 nova_compute[247443]: 2025-11-26 12:55:28.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:55:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:31 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev be587424-8db2-4519-aac1-36b0b7b29987 does not exist
Nov 26 07:55:31 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0298f505-f118-4a8b-b7fe-717fc56dcd4a does not exist
Nov 26 07:55:31 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev fccfecd7-7114-47ae-bb85-65357551b3b5 does not exist
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:55:31 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:55:31 np0005536586 podman[251146]: 2025-11-26 12:55:31.998928121 +0000 UTC m=+0.028570981 container create f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 07:55:32 np0005536586 systemd[1]: Started libpod-conmon-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope.
Nov 26 07:55:32 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:32.067183329 +0000 UTC m=+0.096826179 container init f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:32.073440742 +0000 UTC m=+0.103083592 container start f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:32.074492956 +0000 UTC m=+0.104135807 container attach f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:55:32 np0005536586 infallible_kare[251159]: 167 167
Nov 26 07:55:32 np0005536586 systemd[1]: libpod-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope: Deactivated successfully.
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:32.07843736 +0000 UTC m=+0.108080210 container died f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:31.987990676 +0000 UTC m=+0.017633546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:32 np0005536586 systemd[1]: var-lib-containers-storage-overlay-28f8e0d5fdfeac7bfc30a5cb268fc34e54b85c2c2c0254b77f192f9649941046-merged.mount: Deactivated successfully.
Nov 26 07:55:32 np0005536586 podman[251146]: 2025-11-26 12:55:32.09947958 +0000 UTC m=+0.129122430 container remove f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:55:32 np0005536586 systemd[1]: libpod-conmon-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope: Deactivated successfully.
Nov 26 07:55:32 np0005536586 podman[251181]: 2025-11-26 12:55:32.220537 +0000 UTC m=+0.027562601 container create 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 26 07:55:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:32 np0005536586 systemd[1]: Started libpod-conmon-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope.
Nov 26 07:55:32 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:32 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:32 np0005536586 podman[251181]: 2025-11-26 12:55:32.283420413 +0000 UTC m=+0.090446014 container init 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 07:55:32 np0005536586 podman[251181]: 2025-11-26 12:55:32.290094051 +0000 UTC m=+0.097119642 container start 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 07:55:32 np0005536586 podman[251181]: 2025-11-26 12:55:32.293136835 +0000 UTC m=+0.100162447 container attach 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:55:32 np0005536586 podman[251181]: 2025-11-26 12:55:32.210123303 +0000 UTC m=+0.017148904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:55:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:32 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:55:33 np0005536586 exciting_jang[251194]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:55:33 np0005536586 exciting_jang[251194]: --> relative data size: 1.0
Nov 26 07:55:33 np0005536586 exciting_jang[251194]: --> All data devices are unavailable
Nov 26 07:55:33 np0005536586 systemd[1]: libpod-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251181]: 2025-11-26 12:55:33.11177695 +0000 UTC m=+0.918802542 container died 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 07:55:33 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e-merged.mount: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251181]: 2025-11-26 12:55:33.144448901 +0000 UTC m=+0.951474492 container remove 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:55:33 np0005536586 systemd[1]: libpod-conmon-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.592194032 +0000 UTC m=+0.027042050 container create a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:55:33 np0005536586 systemd[1]: Started libpod-conmon-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope.
Nov 26 07:55:33 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.647826348 +0000 UTC m=+0.082674377 container init a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.652721884 +0000 UTC m=+0.087569904 container start a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.653933609 +0000 UTC m=+0.088781628 container attach a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:55:33 np0005536586 relaxed_banach[251379]: 167 167
Nov 26 07:55:33 np0005536586 systemd[1]: libpod-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.656378136 +0000 UTC m=+0.091226155 container died a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:55:33 np0005536586 systemd[1]: var-lib-containers-storage-overlay-e8afec17353f304ef5d30e597ce2ae1fe08f6a5928b00a8c3ad7aa7c3e2f829d-merged.mount: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.67255678 +0000 UTC m=+0.107404798 container remove a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 07:55:33 np0005536586 podman[251366]: 2025-11-26 12:55:33.581843042 +0000 UTC m=+0.016691081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:33 np0005536586 systemd[1]: libpod-conmon-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope: Deactivated successfully.
Nov 26 07:55:33 np0005536586 podman[251400]: 2025-11-26 12:55:33.794422902 +0000 UTC m=+0.028037105 container create 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:55:33 np0005536586 systemd[1]: Started libpod-conmon-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope.
Nov 26 07:55:33 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:33 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:33 np0005536586 podman[251400]: 2025-11-26 12:55:33.85189684 +0000 UTC m=+0.085511042 container init 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:55:33 np0005536586 podman[251400]: 2025-11-26 12:55:33.857824532 +0000 UTC m=+0.091438734 container start 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 07:55:33 np0005536586 podman[251400]: 2025-11-26 12:55:33.858921119 +0000 UTC m=+0.092535321 container attach 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 07:55:33 np0005536586 podman[251400]: 2025-11-26 12:55:33.783329093 +0000 UTC m=+0.016943315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]: {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    "0": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "devices": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "/dev/loop3"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            ],
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_name": "ceph_lv0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_size": "21470642176",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "name": "ceph_lv0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "tags": {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_name": "ceph",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.crush_device_class": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.encrypted": "0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_id": "0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.vdo": "0"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            },
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "vg_name": "ceph_vg0"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        }
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    ],
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    "1": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "devices": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "/dev/loop4"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            ],
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_name": "ceph_lv1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_size": "21470642176",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "name": "ceph_lv1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "tags": {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_name": "ceph",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.crush_device_class": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.encrypted": "0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_id": "1",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.vdo": "0"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            },
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "vg_name": "ceph_vg1"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        }
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    ],
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    "2": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "devices": [
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "/dev/loop5"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            ],
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_name": "ceph_lv2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_size": "21470642176",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "name": "ceph_lv2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "tags": {
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.cluster_name": "ceph",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.crush_device_class": "",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.encrypted": "0",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osd_id": "2",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:                "ceph.vdo": "0"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            },
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "type": "block",
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:            "vg_name": "ceph_vg2"
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:        }
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]:    ]
Nov 26 07:55:34 np0005536586 unruffled_bassi[251413]: }
Nov 26 07:55:34 np0005536586 systemd[1]: libpod-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope: Deactivated successfully.
Nov 26 07:55:34 np0005536586 podman[251422]: 2025-11-26 12:55:34.525122667 +0000 UTC m=+0.017846678 container died 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:55:34 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c-merged.mount: Deactivated successfully.
Nov 26 07:55:34 np0005536586 podman[251422]: 2025-11-26 12:55:34.554449141 +0000 UTC m=+0.047173143 container remove 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 07:55:34 np0005536586 systemd[1]: libpod-conmon-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope: Deactivated successfully.
Nov 26 07:55:34 np0005536586 podman[251564]: 2025-11-26 12:55:34.964472474 +0000 UTC m=+0.025169972 container create 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 07:55:34 np0005536586 systemd[1]: Started libpod-conmon-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope.
Nov 26 07:55:35 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:35.014870525 +0000 UTC m=+0.075568022 container init 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:35.01955854 +0000 UTC m=+0.080256028 container start 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:35.020681988 +0000 UTC m=+0.081379495 container attach 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 07:55:35 np0005536586 frosty_liskov[251577]: 167 167
Nov 26 07:55:35 np0005536586 systemd[1]: libpod-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope: Deactivated successfully.
Nov 26 07:55:35 np0005536586 conmon[251577]: conmon 0c2944deb2be9d0631c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope/container/memory.events
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:35.023341971 +0000 UTC m=+0.084039459 container died 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:55:35 np0005536586 systemd[1]: var-lib-containers-storage-overlay-f55d9a376275e0167f042db54dd7b9e4fa272c9257163668c947a4cdcabfa210-merged.mount: Deactivated successfully.
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:35.046434376 +0000 UTC m=+0.107131864 container remove 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 07:55:35 np0005536586 podman[251564]: 2025-11-26 12:55:34.954169635 +0000 UTC m=+0.014867123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:35 np0005536586 systemd[1]: libpod-conmon-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope: Deactivated successfully.
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.16220424 +0000 UTC m=+0.026294131 container create 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 07:55:35 np0005536586 systemd[1]: Started libpod-conmon-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope.
Nov 26 07:55:35 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:55:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:35 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.218601337 +0000 UTC m=+0.082691248 container init 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.224601937 +0000 UTC m=+0.088691838 container start 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.225568328 +0000 UTC m=+0.089658219 container attach 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.152079476 +0000 UTC m=+0.016169368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:55:35
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups']
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]: {
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_id": 1,
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "type": "bluestore"
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    },
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_id": 2,
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "type": "bluestore"
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    },
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_id": 0,
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:        "type": "bluestore"
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]:    }
Nov 26 07:55:35 np0005536586 frosty_gauss[251611]: }
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:55:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:55:35 np0005536586 systemd[1]: libpod-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope: Deactivated successfully.
Nov 26 07:55:35 np0005536586 podman[251598]: 2025-11-26 12:55:35.99144866 +0000 UTC m=+0.855538552 container died 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:55:36 np0005536586 systemd[1]: var-lib-containers-storage-overlay-61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2-merged.mount: Deactivated successfully.
Nov 26 07:55:36 np0005536586 podman[251598]: 2025-11-26 12:55:36.023095419 +0000 UTC m=+0.887185310 container remove 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 07:55:36 np0005536586 systemd[1]: libpod-conmon-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope: Deactivated successfully.
Nov 26 07:55:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:55:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:55:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:36 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 0f6d0b5d-345f-4c94-8a3c-b56a06abf1e4 does not exist
Nov 26 07:55:36 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 4ef72ba1-cbbd-4051-b5d8-759c1afda08b does not exist
Nov 26 07:55:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:55:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:55:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:55:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:47 np0005536586 podman[251706]: 2025-11-26 12:55:47.882876396 +0000 UTC m=+0.047373110 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 07:55:47 np0005536586 podman[251705]: 2025-11-26 12:55:47.910348365 +0000 UTC m=+0.074893661 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 07:55:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:52 np0005536586 podman[251739]: 2025-11-26 12:55:52.889554451 +0000 UTC m=+0.056444968 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:55:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:55:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:55:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:56:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:56:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:56:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:56:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:56:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:56:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:56:18 np0005536586 podman[251762]: 2025-11-26 12:56:18.880947045 +0000 UTC m=+0.039971790 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 07:56:18 np0005536586 podman[251763]: 2025-11-26 12:56:18.88429721 +0000 UTC m=+0.041421974 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 07:56:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 07:56:22 np0005536586 nova_compute[247443]: 2025-11-26 12:56:22.842 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:23 np0005536586 podman[251797]: 2025-11-26 12:56:23.893593361 +0000 UTC m=+0.057706025 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 07:56:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:24 np0005536586 nova_compute[247443]: 2025-11-26 12:56:24.847 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:24 np0005536586 nova_compute[247443]: 2025-11-26 12:56:24.848 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:56:24 np0005536586 nova_compute[247443]: 2025-11-26 12:56:24.848 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:56:24 np0005536586 nova_compute[247443]: 2025-11-26 12:56:24.859 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:56:24 np0005536586 nova_compute[247443]: 2025-11-26 12:56:24.859 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.839 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:56:26 np0005536586 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:56:27 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:56:27 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536353661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.168 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.359 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5228MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": 
"0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.523 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.523 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.597 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing inventories for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.664 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Updating ProviderTree inventory for provider b5f91a62-c356-4895-a9c1-523d85f8751b from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.665 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.678 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing aggregate associations for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.696 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing trait associations for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b, traits: HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 07:56:27 np0005536586 nova_compute[247443]: 2025-11-26 12:56:27.708 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:56:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:56:28 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252917307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:56:28 np0005536586 nova_compute[247443]: 2025-11-26 12:56:28.036 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 07:56:28 np0005536586 nova_compute[247443]: 2025-11-26 12:56:28.040 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 07:56:28 np0005536586 nova_compute[247443]: 2025-11-26 12:56:28.052 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 07:56:28 np0005536586 nova_compute[247443]: 2025-11-26 12:56:28.053 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 07:56:28 np0005536586 nova_compute[247443]: 2025-11-26 12:56:28.053 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 07:56:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:29 np0005536586 nova_compute[247443]: 2025-11-26 12:56:29.050 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:56:29 np0005536586 nova_compute[247443]: 2025-11-26 12:56:29.051 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:56:29 np0005536586 nova_compute[247443]: 2025-11-26 12:56:29.051 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:56:29 np0005536586 nova_compute[247443]: 2025-11-26 12:56:29.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 07:56:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:56:35
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', 'vms']
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:56:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.077368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796077401, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1343, "num_deletes": 506, "total_data_size": 1616483, "memory_usage": 1646048, "flush_reason": "Manual Compaction"}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796082554, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1600915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13494, "largest_seqno": 14836, "table_properties": {"data_size": 1594983, "index_size": 2752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14869, "raw_average_key_size": 18, "raw_value_size": 1581196, "raw_average_value_size": 1916, "num_data_blocks": 126, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161687, "oldest_key_time": 1764161687, "file_creation_time": 1764161796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 5204 microseconds, and 3925 cpu microseconds.
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082579) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1600915 bytes OK
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082592) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082959) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082981) EVENT_LOG_v1 {"time_micros": 1764161796082966, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1609408, prev total WAL file size 1609408, number of live WAL files 2.
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.083381) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1563KB)], [32(7390KB)]
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796083409, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9168939, "oldest_snapshot_seqno": -1}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3803 keys, 7140974 bytes, temperature: kUnknown
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796098700, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7140974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7113873, "index_size": 16495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93229, "raw_average_key_size": 24, "raw_value_size": 7043351, "raw_average_value_size": 1852, "num_data_blocks": 699, "num_entries": 3803, "num_filter_entries": 3803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.098861) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7140974 bytes
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.099184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 598.2 rd, 465.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4828, records dropped: 1025 output_compression: NoCompression
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.099197) EVENT_LOG_v1 {"time_micros": 1764161796099191, "job": 14, "event": "compaction_finished", "compaction_time_micros": 15327, "compaction_time_cpu_micros": 13012, "output_level": 6, "num_output_files": 1, "total_output_size": 7140974, "num_input_records": 4828, "num_output_records": 3803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796099465, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796100555, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.083325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 07:56:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:36 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev d8fe2dca-14a2-4ad6-9ea1-0e4f095e72e7 does not exist
Nov 26 07:56:36 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev b37a9705-c183-40d4-96ee-382034d27a03 does not exist
Nov 26 07:56:36 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 43071f8e-35eb-440b-b83a-770b1d3f8648 does not exist
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:56:36 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.025699434 +0000 UTC m=+0.026178057 container create 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:56:37 np0005536586 systemd[1]: Started libpod-conmon-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope.
Nov 26 07:56:37 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.075285434 +0000 UTC m=+0.075764067 container init 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:56:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:56:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:37 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.080277624 +0000 UTC m=+0.080756247 container start 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.081446237 +0000 UTC m=+0.081924860 container attach 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:56:37 np0005536586 magical_chatterjee[252136]: 167 167
Nov 26 07:56:37 np0005536586 systemd[1]: libpod-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope: Deactivated successfully.
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.084667259 +0000 UTC m=+0.085145882 container died 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:56:37 np0005536586 systemd[1]: var-lib-containers-storage-overlay-65a8bb8e451cf84e9306be10d45ea1f4dc69d38d780be9d84cd75bb8644b31aa-merged.mount: Deactivated successfully.
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.106967375 +0000 UTC m=+0.107445998 container remove 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:56:37 np0005536586 podman[252123]: 2025-11-26 12:56:37.015249696 +0000 UTC m=+0.015728319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:37 np0005536586 systemd[1]: libpod-conmon-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope: Deactivated successfully.
Nov 26 07:56:37 np0005536586 podman[252158]: 2025-11-26 12:56:37.22499678 +0000 UTC m=+0.025812198 container create b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:56:37 np0005536586 systemd[1]: Started libpod-conmon-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope.
Nov 26 07:56:37 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:37 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:37 np0005536586 podman[252158]: 2025-11-26 12:56:37.28794155 +0000 UTC m=+0.088756988 container init b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:56:37 np0005536586 podman[252158]: 2025-11-26 12:56:37.294159983 +0000 UTC m=+0.094975400 container start b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:56:37 np0005536586 podman[252158]: 2025-11-26 12:56:37.295228425 +0000 UTC m=+0.096043844 container attach b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:56:37 np0005536586 podman[252158]: 2025-11-26 12:56:37.214709688 +0000 UTC m=+0.015525127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:38 np0005536586 charming_turing[252171]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:56:38 np0005536586 charming_turing[252171]: --> relative data size: 1.0
Nov 26 07:56:38 np0005536586 charming_turing[252171]: --> All data devices are unavailable
Nov 26 07:56:38 np0005536586 systemd[1]: libpod-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope: Deactivated successfully.
Nov 26 07:56:38 np0005536586 podman[252158]: 2025-11-26 12:56:38.123409786 +0000 UTC m=+0.924225225 container died b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 07:56:38 np0005536586 systemd[1]: var-lib-containers-storage-overlay-66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8-merged.mount: Deactivated successfully.
Nov 26 07:56:38 np0005536586 podman[252158]: 2025-11-26 12:56:38.157006336 +0000 UTC m=+0.957821754 container remove b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 07:56:38 np0005536586 systemd[1]: libpod-conmon-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope: Deactivated successfully.
Nov 26 07:56:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.614624014 +0000 UTC m=+0.027708030 container create 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 07:56:38 np0005536586 systemd[1]: Started libpod-conmon-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope.
Nov 26 07:56:38 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.675798819 +0000 UTC m=+0.088882855 container init 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.680914502 +0000 UTC m=+0.093998519 container start 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.682120606 +0000 UTC m=+0.095204622 container attach 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 07:56:38 np0005536586 upbeat_mccarthy[252353]: 167 167
Nov 26 07:56:38 np0005536586 systemd[1]: libpod-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope: Deactivated successfully.
Nov 26 07:56:38 np0005536586 conmon[252353]: conmon 209ded90391c24872f6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope/container/memory.events
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.685259823 +0000 UTC m=+0.098343839 container died 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 07:56:38 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6f2e1635d95e2542c9e9edcd4d3ff377d4d4a79948c5e4b97fc75d174f3288a0-merged.mount: Deactivated successfully.
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.602738631 +0000 UTC m=+0.015822657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:38 np0005536586 podman[252340]: 2025-11-26 12:56:38.702203562 +0000 UTC m=+0.115287578 container remove 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:56:38 np0005536586 systemd[1]: libpod-conmon-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope: Deactivated successfully.
Nov 26 07:56:38 np0005536586 podman[252374]: 2025-11-26 12:56:38.82862156 +0000 UTC m=+0.029458450 container create 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:56:38 np0005536586 systemd[1]: Started libpod-conmon-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope.
Nov 26 07:56:38 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:38 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:38 np0005536586 podman[252374]: 2025-11-26 12:56:38.890253826 +0000 UTC m=+0.091090717 container init 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:56:38 np0005536586 podman[252374]: 2025-11-26 12:56:38.895289819 +0000 UTC m=+0.096126710 container start 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:56:38 np0005536586 podman[252374]: 2025-11-26 12:56:38.896533254 +0000 UTC m=+0.097370144 container attach 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 07:56:38 np0005536586 podman[252374]: 2025-11-26 12:56:38.817119047 +0000 UTC m=+0.017955939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]: {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    "0": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "devices": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "/dev/loop3"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            ],
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_name": "ceph_lv0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_size": "21470642176",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "name": "ceph_lv0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "tags": {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_name": "ceph",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.crush_device_class": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.encrypted": "0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_id": "0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.vdo": "0"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            },
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "vg_name": "ceph_vg0"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        }
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    ],
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    "1": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "devices": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "/dev/loop4"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            ],
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_name": "ceph_lv1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_size": "21470642176",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "name": "ceph_lv1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "tags": {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_name": "ceph",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.crush_device_class": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.encrypted": "0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_id": "1",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.vdo": "0"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            },
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "vg_name": "ceph_vg1"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        }
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    ],
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    "2": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "devices": [
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "/dev/loop5"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            ],
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_name": "ceph_lv2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_size": "21470642176",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "name": "ceph_lv2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "tags": {
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.cluster_name": "ceph",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.crush_device_class": "",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.encrypted": "0",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osd_id": "2",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:                "ceph.vdo": "0"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            },
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "type": "block",
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:            "vg_name": "ceph_vg2"
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:        }
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]:    ]
Nov 26 07:56:39 np0005536586 brave_wozniak[252387]: }
Nov 26 07:56:39 np0005536586 podman[252374]: 2025-11-26 12:56:39.539181274 +0000 UTC m=+0.740018165 container died 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:56:39 np0005536586 systemd[1]: libpod-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope: Deactivated successfully.
Nov 26 07:56:39 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4-merged.mount: Deactivated successfully.
Nov 26 07:56:39 np0005536586 podman[252374]: 2025-11-26 12:56:39.567801983 +0000 UTC m=+0.768638874 container remove 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:56:39 np0005536586 systemd[1]: libpod-conmon-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope: Deactivated successfully.
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.014774125 +0000 UTC m=+0.029935099 container create 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:56:40 np0005536586 systemd[1]: Started libpod-conmon-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope.
Nov 26 07:56:40 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.070590449 +0000 UTC m=+0.085751434 container init 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.075827812 +0000 UTC m=+0.090988806 container start 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.076910031 +0000 UTC m=+0.092071016 container attach 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:56:40 np0005536586 wizardly_jackson[252548]: 167 167
Nov 26 07:56:40 np0005536586 systemd[1]: libpod-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope: Deactivated successfully.
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.078542378 +0000 UTC m=+0.093703362 container died 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 07:56:40 np0005536586 systemd[1]: var-lib-containers-storage-overlay-9731aad20ea468440349a9f869edf36a30c84c9beae948fa6e5b03ab385f3427-merged.mount: Deactivated successfully.
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.09795958 +0000 UTC m=+0.113120564 container remove 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 07:56:40 np0005536586 podman[252535]: 2025-11-26 12:56:40.001985719 +0000 UTC m=+0.017146703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:40 np0005536586 systemd[1]: libpod-conmon-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope: Deactivated successfully.
Nov 26 07:56:40 np0005536586 podman[252570]: 2025-11-26 12:56:40.219025097 +0000 UTC m=+0.028175672 container create 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:56:40 np0005536586 systemd[1]: Started libpod-conmon-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope.
Nov 26 07:56:40 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:56:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:40 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:56:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:40 np0005536586 podman[252570]: 2025-11-26 12:56:40.275840024 +0000 UTC m=+0.084990608 container init 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 07:56:40 np0005536586 podman[252570]: 2025-11-26 12:56:40.280938434 +0000 UTC m=+0.090088999 container start 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:56:40 np0005536586 podman[252570]: 2025-11-26 12:56:40.28223558 +0000 UTC m=+0.091386143 container attach 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:56:40 np0005536586 podman[252570]: 2025-11-26 12:56:40.207600923 +0000 UTC m=+0.016751487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]: {
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_id": 1,
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "type": "bluestore"
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    },
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_id": 2,
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "type": "bluestore"
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    },
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_id": 0,
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:        "type": "bluestore"
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]:    }
Nov 26 07:56:41 np0005536586 quizzical_sutherland[252583]: }
Nov 26 07:56:41 np0005536586 systemd[1]: libpod-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope: Deactivated successfully.
Nov 26 07:56:41 np0005536586 podman[252570]: 2025-11-26 12:56:41.068519473 +0000 UTC m=+0.877670037 container died 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:41 np0005536586 systemd[1]: var-lib-containers-storage-overlay-2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6-merged.mount: Deactivated successfully.
Nov 26 07:56:41 np0005536586 podman[252570]: 2025-11-26 12:56:41.101688257 +0000 UTC m=+0.910838821 container remove 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 07:56:41 np0005536586 systemd[1]: libpod-conmon-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope: Deactivated successfully.
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:41 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1183bdc7-0adc-4402-954b-5bb66cbec355 does not exist
Nov 26 07:56:41 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 9320b0de-420f-4965-9cbc-1481d047ab2e does not exist
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:41 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:56:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:56:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:56:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:49 np0005536586 podman[252675]: 2025-11-26 12:56:49.881382639 +0000 UTC m=+0.043675129 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 07:56:49 np0005536586 podman[252676]: 2025-11-26 12:56:49.886418491 +0000 UTC m=+0.048212631 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 07:56:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:54 np0005536586 podman[252708]: 2025-11-26 12:56:54.893385997 +0000 UTC m=+0.058089780 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:56:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:56:55 np0005536586 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3318 writes, 14K keys, 3318 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3318 writes, 3318 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1304 writes, 5915 keys, 1304 commit groups, 1.0 writes per commit group, ingest: 8.56 MB, 0.01 MB/s#012Interval WAL: 1304 writes, 1304 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    343.4      0.05              0.04         7    0.007       0      0       0.0       0.0#012  L6      1/0    6.81 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    530.7    434.5      0.10              0.08         6    0.016     24K   3203       0.0       0.0#012 Sum      1/0    6.81 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    356.6    404.6      0.14              0.12        13    0.011     24K   3203       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    377.0    380.0      0.09              0.08         8    0.012     17K   2474       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    530.7    434.5      0.10              0.08         6    0.016     24K   3203       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    350.2      0.05              0.04         6    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.1 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 308.00 MB usage: 1.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(99,1.23 MB,0.398695%) FilterBlock(14,75.55 KB,0.0239533%) IndexBlock(14,149.28 KB,0.047332%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 07:56:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:56:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:56:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:57:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:57:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.731 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:57:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:05 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:06 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:06 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:08 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:10 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:11 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:12 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:14 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:16 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:16 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:18 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 07:57:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 07:57:18 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 07:57:18 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 07:57:20 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:20 np0005536586 podman[252732]: 2025-11-26 12:57:20.892829139 +0000 UTC m=+0.050170283 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 07:57:20 np0005536586 podman[252731]: 2025-11-26 12:57:20.914599956 +0000 UTC m=+0.072950903 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 07:57:21 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:22 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:24 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:24 np0005536586 nova_compute[247443]: 2025-11-26 12:57:24.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:25 np0005536586 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:25 np0005536586 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 07:57:25 np0005536586 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 07:57:25 np0005536586 nova_compute[247443]: 2025-11-26 12:57:25.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 07:57:25 np0005536586 podman[252764]: 2025-11-26 12:57:25.910643861 +0000 UTC m=+0.063586070 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 07:57:26 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:26 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.842 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.842 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 07:57:27 np0005536586 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:57:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:57:28 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41768250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.202 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:57:28 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.429 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.430 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5223MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": 
"0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.431 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.431 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.478 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.478 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.490 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 07:57:28 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 07:57:28 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743192530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.837 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.842 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.853 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.854 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 07:57:28 np0005536586 nova_compute[247443]: 2025-11-26 12:57:28.855 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:57:29 np0005536586 nova_compute[247443]: 2025-11-26 12:57:29.850 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:29 np0005536586 nova_compute[247443]: 2025-11-26 12:57:29.850 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:29 np0005536586 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:29 np0005536586 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:29 np0005536586 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 07:57:30 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:30 np0005536586 nova_compute[247443]: 2025-11-26 12:57:30.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:30 np0005536586 nova_compute[247443]: 2025-11-26 12:57:30.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 07:57:31 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:32 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:34 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:57:35
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'vms', 'default.rgw.control']
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:57:35 np0005536586 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 07:57:36 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:36 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:38 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:40 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:41 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev a62ce6f9-dde9-44b4-a9b3-66f980949d00 does not exist
Nov 26 07:57:41 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 496e8c7f-bc3b-414a-be76-71b83801c08f does not exist
Nov 26 07:57:41 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev daea28d9-4964-4a72-837e-75a1fb2e70d3 does not exist
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:57:41 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.227824019 +0000 UTC m=+0.029510228 container create 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:57:42 np0005536586 systemd[1]: Started libpod-conmon-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope.
Nov 26 07:57:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.292533666 +0000 UTC m=+0.094219885 container init 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.297881246 +0000 UTC m=+0.099567455 container start 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.299117367 +0000 UTC m=+0.100803576 container attach 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:57:42 np0005536586 goofy_cerf[253104]: 167 167
Nov 26 07:57:42 np0005536586 systemd[1]: libpod-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope: Deactivated successfully.
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.302910326 +0000 UTC m=+0.104596615 container died 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 07:57:42 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.216152257 +0000 UTC m=+0.017838486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:42 np0005536586 systemd[1]: var-lib-containers-storage-overlay-62bc4aaba12a7ed4ab637fb801dc10fc39ee0b361d76ea369ee32c5ddf33b07e-merged.mount: Deactivated successfully.
Nov 26 07:57:42 np0005536586 podman[253091]: 2025-11-26 12:57:42.328221878 +0000 UTC m=+0.129908087 container remove 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:57:42 np0005536586 systemd[1]: libpod-conmon-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope: Deactivated successfully.
Nov 26 07:57:42 np0005536586 podman[253125]: 2025-11-26 12:57:42.461277659 +0000 UTC m=+0.039416080 container create a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 07:57:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 07:57:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:42 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 07:57:42 np0005536586 systemd[1]: Started libpod-conmon-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope.
Nov 26 07:57:42 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:42 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:42 np0005536586 podman[253125]: 2025-11-26 12:57:42.531842162 +0000 UTC m=+0.109980593 container init a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:57:42 np0005536586 podman[253125]: 2025-11-26 12:57:42.538012813 +0000 UTC m=+0.116151235 container start a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 07:57:42 np0005536586 podman[253125]: 2025-11-26 12:57:42.539112166 +0000 UTC m=+0.117250587 container attach a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 07:57:42 np0005536586 podman[253125]: 2025-11-26 12:57:42.446086754 +0000 UTC m=+0.024225185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:43 np0005536586 lucid_raman[253139]: --> passed data devices: 0 physical, 3 LVM
Nov 26 07:57:43 np0005536586 lucid_raman[253139]: --> relative data size: 1.0
Nov 26 07:57:43 np0005536586 lucid_raman[253139]: --> All data devices are unavailable
Nov 26 07:57:43 np0005536586 systemd[1]: libpod-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope: Deactivated successfully.
Nov 26 07:57:43 np0005536586 podman[253168]: 2025-11-26 12:57:43.468155302 +0000 UTC m=+0.017941391 container died a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 07:57:43 np0005536586 systemd[1]: var-lib-containers-storage-overlay-a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f-merged.mount: Deactivated successfully.
Nov 26 07:57:43 np0005536586 podman[253168]: 2025-11-26 12:57:43.504912629 +0000 UTC m=+0.054698697 container remove a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:57:43 np0005536586 systemd[1]: libpod-conmon-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope: Deactivated successfully.
Nov 26 07:57:43 np0005536586 podman[253310]: 2025-11-26 12:57:43.934718833 +0000 UTC m=+0.026312821 container create d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 07:57:43 np0005536586 systemd[1]: Started libpod-conmon-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope.
Nov 26 07:57:43 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:43 np0005536586 podman[253310]: 2025-11-26 12:57:43.98905649 +0000 UTC m=+0.080650497 container init d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:57:43 np0005536586 podman[253310]: 2025-11-26 12:57:43.994284514 +0000 UTC m=+0.085878503 container start d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 07:57:43 np0005536586 podman[253310]: 2025-11-26 12:57:43.995454219 +0000 UTC m=+0.087048206 container attach d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:57:43 np0005536586 great_goldstine[253323]: 167 167
Nov 26 07:57:43 np0005536586 systemd[1]: libpod-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope: Deactivated successfully.
Nov 26 07:57:43 np0005536586 podman[253310]: 2025-11-26 12:57:43.999223234 +0000 UTC m=+0.090817222 container died d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:57:44 np0005536586 systemd[1]: var-lib-containers-storage-overlay-6fd16bc8da6851179e9e90e0e2450eb2208ee52d819ecc4b2243b0e008d31bcf-merged.mount: Deactivated successfully.
Nov 26 07:57:44 np0005536586 podman[253310]: 2025-11-26 12:57:44.018185919 +0000 UTC m=+0.109779907 container remove d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 07:57:44 np0005536586 podman[253310]: 2025-11-26 12:57:43.924167915 +0000 UTC m=+0.015761893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:44 np0005536586 systemd[1]: libpod-conmon-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope: Deactivated successfully.
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.144099996 +0000 UTC m=+0.029462398 container create 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 07:57:44 np0005536586 systemd[1]: Started libpod-conmon-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope.
Nov 26 07:57:44 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:44 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.211485798 +0000 UTC m=+0.096848210 container init 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.217404986 +0000 UTC m=+0.102767388 container start 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.218316935 +0000 UTC m=+0.103679337 container attach 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.133487481 +0000 UTC m=+0.018849904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:44 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]: {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    "0": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "devices": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "/dev/loop3"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            ],
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_name": "ceph_lv0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_size": "21470642176",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "name": "ceph_lv0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "tags": {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_name": "ceph",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.crush_device_class": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.encrypted": "0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_id": "0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.vdo": "0"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            },
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "vg_name": "ceph_vg0"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        }
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    ],
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    "1": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "devices": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "/dev/loop4"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            ],
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_name": "ceph_lv1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_size": "21470642176",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "name": "ceph_lv1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "tags": {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_name": "ceph",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.crush_device_class": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.encrypted": "0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_id": "1",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.vdo": "0"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            },
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "vg_name": "ceph_vg1"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        }
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    ],
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    "2": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "devices": [
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "/dev/loop5"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            ],
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_name": "ceph_lv2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_size": "21470642176",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "name": "ceph_lv2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "tags": {
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cephx_lockbox_secret": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.cluster_name": "ceph",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.crush_device_class": "",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.encrypted": "0",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osd_id": "2",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:                "ceph.vdo": "0"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            },
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "type": "block",
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:            "vg_name": "ceph_vg2"
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:        }
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]:    ]
Nov 26 07:57:44 np0005536586 zealous_varahamihira[253358]: }
Nov 26 07:57:44 np0005536586 systemd[1]: libpod-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope: Deactivated successfully.
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.867320295 +0000 UTC m=+0.752682707 container died 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 07:57:44 np0005536586 systemd[1]: var-lib-containers-storage-overlay-82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1-merged.mount: Deactivated successfully.
Nov 26 07:57:44 np0005536586 podman[253345]: 2025-11-26 12:57:44.899649917 +0000 UTC m=+0.785012319 container remove 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 07:57:44 np0005536586 systemd[1]: libpod-conmon-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope: Deactivated successfully.
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 07:57:45 np0005536586 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 07:57:45 np0005536586 podman[253509]: 2025-11-26 12:57:45.35898348 +0000 UTC m=+0.030641581 container create c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 07:57:45 np0005536586 systemd[1]: Started libpod-conmon-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope.
Nov 26 07:57:45 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:45 np0005536586 podman[253509]: 2025-11-26 12:57:45.419316467 +0000 UTC m=+0.090974578 container init c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 07:57:45 np0005536586 podman[253509]: 2025-11-26 12:57:45.424680508 +0000 UTC m=+0.096338609 container start c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 07:57:45 np0005536586 podman[253509]: 2025-11-26 12:57:45.425935624 +0000 UTC m=+0.097593746 container attach c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 07:57:45 np0005536586 jolly_ishizaka[253522]: 167 167
Nov 26 07:57:45 np0005536586 systemd[1]: libpod-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope: Deactivated successfully.
Nov 26 07:57:45 np0005536586 conmon[253522]: conmon c2d5bdf53f9baf23639c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope/container/memory.events
Nov 26 07:57:45 np0005536586 podman[253509]: 2025-11-26 12:57:45.346690126 +0000 UTC m=+0.018348248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:45 np0005536586 podman[253527]: 2025-11-26 12:57:45.459371461 +0000 UTC m=+0.019723590 container died c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:57:45 np0005536586 systemd[1]: var-lib-containers-storage-overlay-c32b38a24f96f30cba06af8e2834c24412aa0934413a1785b83710ef9d9d6a3b-merged.mount: Deactivated successfully.
Nov 26 07:57:45 np0005536586 podman[253527]: 2025-11-26 12:57:45.476390732 +0000 UTC m=+0.036742831 container remove c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 07:57:45 np0005536586 systemd[1]: libpod-conmon-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope: Deactivated successfully.
Nov 26 07:57:45 np0005536586 podman[253545]: 2025-11-26 12:57:45.604863653 +0000 UTC m=+0.028815428 container create 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 07:57:45 np0005536586 systemd[1]: Started libpod-conmon-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope.
Nov 26 07:57:45 np0005536586 systemd[1]: Started libcrun container.
Nov 26 07:57:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:45 np0005536586 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 07:57:45 np0005536586 podman[253545]: 2025-11-26 12:57:45.674772892 +0000 UTC m=+0.098724677 container init 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:57:45 np0005536586 podman[253545]: 2025-11-26 12:57:45.679710669 +0000 UTC m=+0.103662435 container start 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:57:45 np0005536586 podman[253545]: 2025-11-26 12:57:45.680905362 +0000 UTC m=+0.104857127 container attach 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 07:57:45 np0005536586 podman[253545]: 2025-11-26 12:57:45.593088747 +0000 UTC m=+0.017040533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 07:57:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:46 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]: {
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_id": 1,
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "type": "bluestore"
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    },
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_id": 2,
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "type": "bluestore"
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    },
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_id": 0,
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:        "type": "bluestore"
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]:    }
Nov 26 07:57:46 np0005536586 charming_mestorf[253558]: }
Nov 26 07:57:46 np0005536586 systemd[1]: libpod-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope: Deactivated successfully.
Nov 26 07:57:46 np0005536586 podman[253545]: 2025-11-26 12:57:46.455149531 +0000 UTC m=+0.879101296 container died 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 07:57:46 np0005536586 systemd[1]: var-lib-containers-storage-overlay-20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b-merged.mount: Deactivated successfully.
Nov 26 07:57:46 np0005536586 podman[253545]: 2025-11-26 12:57:46.489187161 +0000 UTC m=+0.913138937 container remove 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 07:57:46 np0005536586 systemd[1]: libpod-conmon-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope: Deactivated successfully.
Nov 26 07:57:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 07:57:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:46 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 07:57:46 np0005536586 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:46 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 64c196d9-f8f4-41f3-8a3d-9d55f49328b1 does not exist
Nov 26 07:57:46 np0005536586 ceph-mgr[75236]: [progress WARNING root] complete: ev 1484eeaf-69e9-4dd4-b65e-81924193db24 does not exist
Nov 26 07:57:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:47 np0005536586 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 07:57:48 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:49 np0005536586 systemd-logind[777]: New session 51 of user zuul.
Nov 26 07:57:49 np0005536586 systemd[1]: Started Session 51 of User zuul.
Nov 26 07:57:50 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:51 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:51 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14395 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:57:51 np0005536586 podman[253842]: 2025-11-26 12:57:51.885328119 +0000 UTC m=+0.047446629 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, managed_by=edpm_ansible)
Nov 26 07:57:51 np0005536586 podman[253841]: 2025-11-26 12:57:51.90736053 +0000 UTC m=+0.069673135 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 07:57:51 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:57:52 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 07:57:52 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678039320' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 07:57:52 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:54 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:56 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:57:56 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:56 np0005536586 podman[253955]: 2025-11-26 12:57:56.893296151 +0000 UTC m=+0.056308863 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 07:57:58 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:57:58 np0005536586 ovs-vsctl[254024]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 26 07:57:59 np0005536586 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 07:57:59 np0005536586 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 07:57:59 np0005536586 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 26 07:58:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: cache status {prefix=cache status} (starting...)
Nov 26 07:58:00 np0005536586 lvm[254326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 07:58:00 np0005536586 lvm[254326]: VG ceph_vg2 finished
Nov 26 07:58:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: client ls {prefix=client ls} (starting...)
Nov 26 07:58:00 np0005536586 lvm[254338]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 07:58:00 np0005536586 lvm[254338]: VG ceph_vg1 finished
Nov 26 07:58:00 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:58:00 np0005536586 lvm[254378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 07:58:00 np0005536586 lvm[254378]: VG ceph_vg0 finished
Nov 26 07:58:00 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 07:58:00 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14403 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:00 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801038007' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:01 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:58:01.445+0000 7f35d37a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 07:58:01 np0005536586 ceph-mgr[75236]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225205702' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 07:58:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.731 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 07:58:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.732 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 07:58:01 np0005536586 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.732 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292323180' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 07:58:01 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670901816' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 07:58:01 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568460546' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: ops {prefix=ops} (starting...)
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159389430' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137303050' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14423 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: session ls {prefix=session ls} (starting...)
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 07:58:02 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442166289' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:02 np0005536586 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: status {prefix=status} (starting...)
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702644273' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210058228' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962523951' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713095360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 07:58:03 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333546072' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14439 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:03 np0005536586 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:58:03.874+0000 7f35d37a6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 07:58:03 np0005536586 ceph-mgr[75236]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 07:58:04 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14441 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169523798' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 07:58:04 np0005536586 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 07:58:04 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978914446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 07:58:04 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14449 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 07:58:04 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490509282' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14453 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 07:58:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338340369' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 07:58:05 np0005536586 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162174485' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 44 ms_handle_reset con 0x5640f203e000 session 0x5640f1f93860
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57737216 unmapped: 2056192 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 331679 data_alloc: 218103808 data_used: 36864
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57737216 unmapped: 2056192 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57802752 unmapped: 1990656 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57802752 unmapped: 1990656 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe153000/0x0/0x4ffc00000, data 0x36941/0x79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 46 handle_osd_map epochs [47,48], i have 46, src has [1,48]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active+clean] exit Started/Primary/Active/Clean 42.993061 25 0.000069
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary/Active 42.994247 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary 43.775537 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started 43.775726 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 96.626022339s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] exit Reset 0.001773 2 0.000343
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] exit Start 0.000090 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001935 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001881 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000157 2 0.000076
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002628 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000267 2 0.000071
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002463 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000115 2 0.000101
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002332 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000096 2 0.000068
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000074 2 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002474 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002493 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 2 0.000136
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002613 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000171 2 0.000144
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 2 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003489 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003288 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003134 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003311 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003593 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003778 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003583 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003406 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003390 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003549 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003040 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003028 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002991 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002976 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002957 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002952 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002940 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000610 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000615 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000625 2 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000632 2 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002979 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003057 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000671 2 0.000050
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000710 2 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000699 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003105 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003117 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000736 2 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000753 2 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000768 2 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003394 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000923 2 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000927 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000961 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000974 2 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000975 2 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003442 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001002 2 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001041 2 0.000075
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001065 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000046 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001067 2 0.000080
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000052 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000990 2 0.000112
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000057 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001027 2 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000928 2 0.000058
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000691 2 0.000311
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000088 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000719 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000514 2 0.000233
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000052 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58368000 unmapped: 1425408 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.594823 4 0.000066
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.594887 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595088 4 0.000215
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595274 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597916 4 0.000132
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598018 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596385 4 0.000077
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596425 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595930 4 0.000132
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596023 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595469 4 0.000078
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596372 4 0.000077
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596431 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595640 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595971 4 0.000064
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596764 4 0.000728
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597459 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597795 4 0.000101
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597859 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595408 4 0.000102
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595461 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595640 4 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595673 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596978 4 0.000721
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597677 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595841 4 0.000084
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595754 4 0.000052
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595899 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595811 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595538 4 0.000576
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596040 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596412 4 0.000054
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596445 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596998 4 0.000077
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597056 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598264 4 0.000056
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598301 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596779 4 0.000069
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596815 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596896 4 0.000093
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596964 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598459 4 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598505 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596769 4 0.000080
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596818 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598284 4 0.000092
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598332 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598074 4 0.000133
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598175 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.599709 3 0.000311
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596909 4 0.000117
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596985 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering 0.599852 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597221 4 0.000189
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597383 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596645 4 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596678 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596669 4 0.000064
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596720 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596572 4 0.000131
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596437 4 0.000210
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596580 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596684 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001558 3 0.000139
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002833 3 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002820 3 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002976 3 0.000045
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003050 3 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003036 3 0.000293
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003039 3 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003257 3 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003258 3 0.000072
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003258 3 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003215 3 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003231 3 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003194 3 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003194 3 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003170 3 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003171 3 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003150 3 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003135 3 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003120 3 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003106 3 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003078 3 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003093 3 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003416 3 0.000169
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003099 3 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003111 3 0.000065
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003058 3 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003030 3 0.000052
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002982 3 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002957 3 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004075 3 0.000161
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000190 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003981 3 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003362 3 0.000186
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.922587395s of 10.002945900s, submitted: 197
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58662912 unmapped: 1130496 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394302 data_alloc: 218103808 data_used: 36864
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x3ba96/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58679296 unmapped: 1114112 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001535 1 0.000459
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000054 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000172
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000046 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000407
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000282 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000109
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000175 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000344
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000111
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000045 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000131
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000205 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000113 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000091 1 0.000209
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000096 1 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000020
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000053 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000191
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000249 1 0.000263
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000207
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.028220 1 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029844 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.624770 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.624797 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026311 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029417 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625109 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625124 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973402023s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Reset 0.000047 1 0.000077
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.026719 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.029725 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627752 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627769 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973158836s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.222549438s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] exit Reset 0.000038 1 0.000063
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971399307s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.220855713s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.025792 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029882 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626319 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626351 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972857475s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222450256s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] exit Reset 0.000030 1 0.000130
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026809 1 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029884 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625916 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625929 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972937584s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Reset 0.000025 1 0.000045
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026864 1 0.000431
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029935 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627402 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627417 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972858429s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222679138s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] exit Reset 0.000032 1 0.000050
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026370 1 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026677 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029959 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627828 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627848 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972868919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222824097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] exit Reset 0.000023 1 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029848 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627652 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627686 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026724 1 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029971 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625880 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625900 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973010063s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223129272s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026756 1 0.000017
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029975 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626434 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626446 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972805977s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222976685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] exit Reset 0.000027 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972826004s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222915649s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] exit Reset 0.000231 1 0.000254
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026898 1 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030078 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626905 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626921 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972652435s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223014832s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] exit Reset 0.000040 1 0.000067
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] exit Reset 0.000305 1 0.000465
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026999 1 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030157 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627132 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627147 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972572327s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223045349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] exit Reset 0.000028 1 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027046 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030216 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.628747 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628762 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027114 1 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030240 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972480774s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223052979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627067 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627079 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] exit Reset 0.000028 1 0.000084
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972475052s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223068237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] exit Reset 0.000034 1 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.026962 1 0.000249
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030299 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.628483 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.628498 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972394943s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223098755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] exit Reset 0.000032 1 0.000050
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027192 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030313 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627307 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627324 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972376823s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223136902s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027210 1 0.000011
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030292 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] exit Reset 0.000035 1 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627693 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627718 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972357750s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223175049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] exit Reset 0.000023 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027204 1 0.000069
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030320 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627025 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627038 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972307205s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223182678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] exit Reset 0.000030 1 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026858 1 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030719 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627783 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627797 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972043037s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222984314s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] exit Reset 0.000020 1 0.000700
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027756 1 0.000011
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030779 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627524 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627549 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971755981s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223205566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] exit Reset 0.000250 1 0.000318
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027765 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.031214 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627981 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628019 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972020149s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223991394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] exit Reset 0.000154 1 0.000207
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.028928 1 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.031918 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.628508 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628575 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970626831s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223220825s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] exit Reset 0.003439 1 0.000402
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] exit Reset 0.000095 1 0.000127
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000059 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000187
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001051 1 0.000467
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000130
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000580 1 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000337 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000020
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000066
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000192
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000121 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000214
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000143
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000137
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000101 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000173 1 0.000333
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000277 1 0.000292
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000435 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000747
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000061
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000056 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000144
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000058
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000219 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000203 1 0.000211
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000155 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000927 1 0.000044
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020906 2 0.000054
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019589 2 0.000056
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019581 2 0.000787
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018698 2 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018441 2 0.000057
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017545 2 0.000090
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017155 2 0.000111
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016818 2 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016163 2 0.000046
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018061 2 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003736 1 0.000261
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020448 2 0.000060
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014993 2 0.000106
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013304 2 0.000043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013698 2 0.000502
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013193 2 0.001043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012514 2 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012159 2 0.000287
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011873 2 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011737 2 0.000064
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011295 2 0.000097
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009845 2 0.000072
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010267 2 0.000765
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009613 2 0.000051
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008749 2 0.000102
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015011 2 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007976 2 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007423 2 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.007724 2 0.000773
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006867 2 0.004512
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006784 2 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007383 2 0.000638
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003775 2 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002450 2 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002552 2 0.000068
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x3ba96/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894665 2 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.917167 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894745 2 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915269 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892125 2 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907301 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895103 2 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914943 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895207 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914069 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909659 6 0.000081
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895364 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913944 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893170 2 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913741 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910123 6 0.000352
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894761 2 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909948 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897280 2 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914982 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894868 2 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909173 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897435 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914679 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897379 2 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913644 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913291 6 0.000123
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894989 2 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894299 2 0.000044
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905454 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905773 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894344 2 0.000014
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901825 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895245 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907771 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895478 2 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908614 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897433 2 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915082 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896522 2 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915645 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895088 2 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903200 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895654 2 0.000017
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904795 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894855 2 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901219 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895832 2 0.000017
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905543 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895897 2 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905878 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895960 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907389 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896389 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908216 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895680 2 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900416 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896590 2 0.000017
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908652 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895764 2 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898296 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896819 2 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910631 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896962 2 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910397 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896355 2 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903929 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896383 2 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.904218 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896623 2 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903667 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913711 7 0.000166
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914648 7 0.000103
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005989 4 0.000118
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005470 4 0.000170
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005438 4 0.000084
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005276 4 0.000109
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005870 4 0.000244
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005195 4 0.000073
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005776 4 0.000181
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917319 7 0.000227
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.915575 7 0.000498
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916989 7 0.000071
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000032 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005791 4 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005676 4 0.000045
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005686 4 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000651 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005742 4 0.000201
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005673 4 0.000107
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005690 4 0.000125
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005545 4 0.000262
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005291 4 0.000967
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005250 4 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005203 4 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005145 4 0.000100
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001001 1 0.000020
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005581 4 0.000080
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918980 7 0.000043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916087 7 0.000620
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916168 7 0.003158
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918672 7 0.000090
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000037 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000082 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918421 7 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000179 1 0.000009
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000252 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000368 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000505 1 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918887 7 0.000093
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000049 1 0.000250
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007207 4 0.000328
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009104 4 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008809 4 0.001432
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009284 4 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008694 4 0.000088
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008722 4 0.000228
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008678 4 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009442 4 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008587 4 0.000074
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008493 4 0.000082
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000034 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000052 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008381 4 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008273 5 0.000148
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007994 4 0.000108
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010301 4 0.000393
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923946 7 0.000208
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922765 7 0.000092
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000313 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009398 4 0.000087
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924763 7 0.000083
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924128 7 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923148 7 0.000059
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924389 7 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923723 7 0.000404
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923814 7 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.014662 3 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.014679 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008309 1 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008387 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.925894 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.013455 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014134 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.929819 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020456 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.021476 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.938491 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.027809 1 0.000017
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.027872 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.946879 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035056 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035259 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.951371 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 909312 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042442 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.042722 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.958918 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.049689 1 0.000257
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.050079 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.968771 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.056769 1 0.000097
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.057306 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.975755 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064026 1 0.000076
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.064109 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.983226 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.122765 1 0.000079
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122832 1 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122880 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122532 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122611 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122689 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122756 1 0.000014
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122821 1 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122861 1 0.000079
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.121632 1 0.000055
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007576 1 0.000066
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.130446 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.054440 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014827 1 0.000098
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137781 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.060576 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022195 1 0.000063
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.144774 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.069573 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029520 1 0.000065
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.152170 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.076317 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036779 1 0.000059
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159525 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082705 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044192 1 0.000056
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.166997 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.091410 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051450 1 0.000059
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174324 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.098087 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058859 1 0.000051
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181772 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.105648 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.132806 2 0.000095
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.254475 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.179323 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.329761 3 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.329810 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000072 1 0.000116
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.013894 2 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.014038 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.253571 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.397407 3 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.397433 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000046 1 0.000054
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008547 2 0.000163
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008657 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.319428 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.605091 2 0.000054
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.605117 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000054 1 0.000057
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008728 2 0.000184
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008985 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.527929 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.897400 2 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.897440 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000036 1 0.000064
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008619 2 0.000160
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008691 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.820810 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 1785856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 1785856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1695744 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416168 data_alloc: 218103808 data_used: 36864
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x43651/0x8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 53 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 1671168 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 1638400 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 1835008 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe134000/0x0/0x4ffc00000, data 0x48bd5/0x98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 1826816 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 1843200 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429363 data_alloc: 218103808 data_used: 40960
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.878599167s of 10.974405289s, submitted: 283
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 1843200 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 57 handle_osd_map epochs [58,59], i have 57, src has [1,59]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000045
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000081 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000182
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000309 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000127 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000129 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000082 1 0.000040
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000112 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 59 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.557621 2 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.557959 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.559552 2 0.000216
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.558006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.558886 2 0.000046
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.560090 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.560213 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.559257 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.559273 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.557072 2 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.557262 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000101 1 0.000314
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000117 1 0.000366
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.557306 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.001618 1 0.001864
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000793 1 0.001617
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000162 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000229 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 794624 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 761856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe127000/0x0/0x4ffc00000, data 0x4fb3b/0xa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.561043 5 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.559191 5 0.000495
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.561221 5 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.559428 5 0.000493
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002377 4 0.000069
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000065
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042730 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044961 4 0.000057
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000105 1 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038535 1 0.000086
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.084030 4 0.000149
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000030 1 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031563 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.115602 4 0.000085
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000054
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038624 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 491520 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.499024 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.582700 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.142447 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000081 1 0.000116
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.428795 1 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583127 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.144369 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000053 1 0.000083
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.538195 1 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583420 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.142994 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.468096 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000336 1 0.000366
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583781 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.144850 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000202 1 0.000177
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003119 2 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004247 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001161 2 0.000104
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000813 2 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004643 2 0.000086
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005329 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000306 2 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000279 2 0.000044
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000037 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000037
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000121 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000094 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000152 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=0 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=0 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000106 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x5192d/0xa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 294912 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490619 data_alloc: 218103808 data_used: 53248
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,63], i have 63, src has [1,63]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996693 2 0.000093
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001844 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995919 2 0.000090
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001605 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996998 2 0.000051
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001351 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805797 2 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805925 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805970 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.806560 2 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.806691 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.806711 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000079 1 0.000189
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805752 2 0.000062
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805932 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805963 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000058 1 0.000089
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805631 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805748 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805761 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000034 1 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996628 2 0.000040
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001617 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000537 1 0.000547
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002062 3 0.000076
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002378 3 0.000065
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002226 3 0.000165
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002987 3 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 278528 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.529094 5 0.000036
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.529290 5 0.000035
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.528972 5 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.529691 5 0.000062
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002504 4 0.000102
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000032 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.021680 1 0.000020
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.024014 4 0.000137
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000042 1 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038613 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.062900 4 0.000178
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000059
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052913 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.115952 4 0.000533
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000074
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038479 1 0.000029
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.259196 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.321919 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851631 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000079
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.167718 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322271 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851326 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.206438 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322378 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851715 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000082
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.298388 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322666 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851783 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000083
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003759 2 0.000089
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004361 2 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004122 2 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004092 2 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=6
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=6
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000568 2 0.000044
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000533 2 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000562 2 0.000013
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000542 2 0.000155
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 16384 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,66], i have 66, src has [1,66]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997420 2 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002358 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997573 2 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002348 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997701 2 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002415 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997871 2 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002255 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001688 3 0.000087
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003229 3 0.000096
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003104 3 0.000173
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003746 3 0.000463
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000056 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x58661/0xba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 537656 data_alloc: 218103808 data_used: 53248
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.402991295s of 10.517934799s, submitted: 132
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000285 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000522 1 0.000382
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000192
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001566 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001159 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x5bdac/0xc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=0 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=0 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000627 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.819200 2 0.001043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.820798 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.821121 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000057 1 0.000083
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.339253 2 0.000044
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.340083 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.819583 2 0.001105
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.820793 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.820937 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000162 1 0.000221
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000122 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001843 3 0.000109
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1826816 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1785856 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.674800 5 0.000209
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.675883 5 0.000043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001643 4 0.000127
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000074
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035375 1 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.037209 4 0.000098
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052698 1 0.000056
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.296500 1 0.000019
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.333653 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009584 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.243603 1 0.000041
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000049 1 0.000075
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.333752 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.008737 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000077 1 0.000213
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000067 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000148
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000240
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000772 3 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000754 3 0.000191
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000044 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1605632 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 70 handle_osd_map epochs [70,71], i have 71, src has [1,71]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995667 2 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995827 2 0.000128
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996738 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996792 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001887 3 0.000195
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002102 3 0.000447
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 1400832 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 571771 data_alloc: 218103808 data_used: 53248
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe0ff000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 1384448 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 1507328 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572039 data_alloc: 218103808 data_used: 53248
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1499136 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1499136 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.437329292s of 11.483425140s, submitted: 49
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 1449984 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 1441792 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 1425408 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 581003 data_alloc: 218103808 data_used: 61440
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1417216 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0f8000/0x0/0x4ffc00000, data 0x66254/0xd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1417216 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0fa000/0x0/0x4ffc00000, data 0x66254/0xd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 585919 data_alloc: 218103808 data_used: 69632
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000210 1 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000272 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000012
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 1 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000123 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.794003 2 0.000046
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.794153 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.794174 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000082 1 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.794479 2 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.794767 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.794790 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000049 1 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 1343488 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 1318912 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fe0ee000/0x0/0x4ffc00000, data 0x6b723/0xdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.473957062s of 10.500686646s, submitted: 23
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.158959 5 0.000050
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.158752 5 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001838 4 0.000103
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049742 1 0.000067
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.051615 4 0.000167
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000018
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038766 1 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1212416 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.002783 1 0.000025
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.054478 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.213465 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000088
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964109 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.054597 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.213405 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001773 2 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002442 2 0.000045
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000908 2 0.000043
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000326 2 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fe0ec000/0x0/0x4ffc00000, data 0x6d4e3/0xe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1212416 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 623120 data_alloc: 218103808 data_used: 69632
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003170 2 0.000583
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006541 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005375 2 0.000075
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008220 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001672 3 0.000405
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001253 3 0.000237
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1122304 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 1089536 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1073152 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1073152 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1056768 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 626872 data_alloc: 218103808 data_used: 81920
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1048576 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1048576 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.050789833s of 10.089987755s, submitted: 36
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=0 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=0 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000038
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001156 2 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004278 2 0.000075
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.005606 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001474 4 0.000118
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000055 1 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126213 2 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 643794 data_alloc: 218103808 data_used: 94208
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0x76032/0xf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 656476 data_alloc: 218103808 data_used: 94208
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x7b2a9/0xfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=0 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=0 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000048
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000063 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000159
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000048 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000224 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.268419 2 0.000141
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.268704 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.268804 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000081 1 0.000120
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 884736 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.012294769s of 10.046145439s, submitted: 36
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005988 6 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002557 3 0.000119
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000071
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035601 1 0.000056
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 1826816 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.971497 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.009805 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.015831 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000282 1 0.000346
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000195
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001813 3 0.000058
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fe0c4000/0x0/0x4ffc00000, data 0x81d8d/0x108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 1818624 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677611 data_alloc: 218103808 data_used: 94208
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996627 2 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998565 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000934 4 0.000132
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 1810432 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c0000/0x0/0x4ffc00000, data 0x837c0/0x10b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 1802240 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 1802240 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c3000/0x0/0x4ffc00000, data 0x837c0/0x10b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 1794048 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 1777664 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 680004 data_alloc: 218103808 data_used: 94208
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 1769472 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcf1f000/0x0/0x4ffc00000, data 0x8533d/0x10e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 1720320 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 1712128 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.018945694s of 10.037871361s, submitted: 24
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 1679360 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 1671168 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687308 data_alloc: 218103808 data_used: 106496
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.196275 96 0.000210
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.199326 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 61.200964 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started 61.200993 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.804219246s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 active pruub 172.308624268s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] exit Reset 0.000468 1 0.000544
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] exit Start 0.000087 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 1638400 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011377 3 0.000205
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.011521 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000069 1 0.000101
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fcf19000/0x0/0x4ffc00000, data 0x88a37/0x114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 1646592 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003318 4 0.000071
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003426 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002309 5 0.000590
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000099 1 0.000042
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000961 1 0.000024
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028382 2 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 1581056 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.974208 1 0.000057
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006516 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.009961 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.009983 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996132851s) [0] async=[0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 44'389 active pruub 178.522598267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] exit Reset 0.000103 1 0.000153
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015636 7 0.000115
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000066 1 0.000101
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 DELETING pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030914 2 0.000163
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031029 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046737 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698729 data_alloc: 218103808 data_used: 106496
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fcf0d000/0x0/0x4ffc00000, data 0x8f4c7/0x11f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 1556480 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 1556480 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.029078484s of 10.060076714s, submitted: 62
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19(unlocked)] enter Initial
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000040
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000193 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 1548288 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 99 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004986 2 0.000104
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.005213 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005233 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000048 1 0.000073
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 1540096 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 709476 data_alloc: 218103808 data_used: 106496
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.002803 6 0.000028
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001994 3 0.000112
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000031
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049736 1 0.000026
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcaf0000/0x0/0x4ffc00000, data 0x96232/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 1474560 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964505 1 0.000053
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016393 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.019219 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000049 1 0.000075
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000704 3 0.000055
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 1441792 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005720 2 0.000099
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006536 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002081 3 0.000257
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcaec000/0x0/0x4ffc00000, data 0x97c7a/0x12f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 1417216 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 1417216 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcaeb000/0x0/0x4ffc00000, data 0x997ff/0x132000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 44.339956 79 0.000297
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 44.341829 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 45.348401 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started 45.348423 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659957886s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 active pruub 186.385116577s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] exit Reset 0.000197 1 0.000274
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcae7000/0x0/0x4ffc00000, data 0x9b37c/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1400832 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735086 data_alloc: 218103808 data_used: 106496
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.878089 3 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.878125 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000104 1 0.000134
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1351680 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 105 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001786 4 0.000082
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002070 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001932 5 0.001016
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000134 1 0.000052
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000246 1 0.000077
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049519 2 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.511452 1 0.000083
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.563574 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.565773 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.565788 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438099861s) [0] async=[0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 44'389 active pruub 192.607360840s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] exit Reset 0.000090 1 0.000125
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 262144 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003164 7 0.000106
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000063 1 0.000061
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 DELETING pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052981 2 0.000192
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053092 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056309 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 262144 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.656579971s of 10.708313942s, submitted: 55
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 196608 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xa1d8c/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 172032 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735624 data_alloc: 218103808 data_used: 106496
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 172032 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcade000/0x0/0x4ffc00000, data 0xa1d8c/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 84.378048 147 0.001390
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 84.381458 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 85.383087 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started 85.383116 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.622039795s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 active pruub 196.307983398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] exit Reset 0.000167 1 0.000246
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] exit Start 0.000046 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.516129 3 0.000280
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.516319 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000161 1 0.000222
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000125 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000368
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1196032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743444 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996549 4 0.000225
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 83.039293 144 0.000253
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 83.041055 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 84.043435 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started 84.043463 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996895 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962300301s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 active pruub 199.162155151s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] exit Reset 0.000246 1 0.000326
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcad2000/0x0/0x4ffc00000, data 0xa6f0c/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.127425 5 0.000459
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000053 1 0.000040
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000257 1 0.000022
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035393 2 0.000033
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1163264 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.969091 3 0.000176
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.969182 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000051 1 0.000070
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000027
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.807058 1 0.000049
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.970551 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.967494 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.967755 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156332970s) [0] async=[0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 44'389 active pruub 202.327163696s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] exit Reset 0.000660 1 0.000865
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1155072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fcace000/0x0/0x4ffc00000, data 0xa898e/0x14c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001375 4 0.000063
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001469 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003923 7 0.000216
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000045 1 0.000047
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 DELETING pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038346 2 0.000156
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038435 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.042395 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1146880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.519079 5 0.000168
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000032
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000615 1 0.000015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035629 2 0.000030
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.492867 1 0.000057
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.048423 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.049922 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.049988 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.470242500s) [1] async=[1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 44'389 active pruub 204.689910889s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] exit Reset 0.000564 1 0.000695
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Started
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Start
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] exit Start 0.000156 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Started/Stray
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1130496 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922721863s of 10.958656311s, submitted: 45
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006232 7 0.000324
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000047 1 0.000039
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 DELETING pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038257 2 0.000131
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038339 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.044784 0 0.000000
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1114112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744418 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1114112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1097728 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1064960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1064960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1056768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744418 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1056768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1048576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1040384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 1032192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 1007616 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744488 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 1007616 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.148889542s of 12.159530640s, submitted: 9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 999424 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 999424 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 991232 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 991232 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747930 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 983040 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 925696 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 749077 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 925696 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.113793373s of 10.123807907s, submitted: 10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 752523 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 909312 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 901120 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 892928 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 868352 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753672 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.883275986s of 11.894712448s, submitted: 8
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 843776 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 755970 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 843776 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 835584 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 835584 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759415 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 819200 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 819200 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.992584229s of 10.003332138s, submitted: 9
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 811008 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 761711 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 794624 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 786432 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 761856 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764006 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 761856 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.025296211s of 10.035103798s, submitted: 7
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 720896 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766304 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 720896 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768601 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769749 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 671744 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.934693336s of 12.946996689s, submitted: 10
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 671744 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 663552 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 663552 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 638976 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772044 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 638976 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 630784 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 614400 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 614400 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774339 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.027474403s of 11.042146683s, submitted: 8
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 598016 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 598016 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 581632 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776633 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 581632 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 573440 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 573440 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 565248 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 565248 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778927 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 557056 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 548864 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 548864 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.975472450s of 10.992065430s, submitted: 12
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 540672 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 524288 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782369 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 524288 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 499712 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783517 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 499712 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 417792 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 417792 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 393216 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 393216 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 376832 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 368640 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 368640 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 360448 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 360448 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 335872 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 335872 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 319488 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 319488 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 303104 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 303104 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 270336 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 270336 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 245760 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 245760 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 229376 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 229376 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 180224 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 180224 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 163840 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 163840 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 147456 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 139264 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 114688 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 114688 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 106496 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 98304 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 98304 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 73728 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 65536 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 40960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 40960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 32768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 32768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 8192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 8192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1040384 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1040384 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 1007616 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 1007616 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 991232 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 991232 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 966656 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 966656 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 958464 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 958464 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 933888 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 933888 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 925696 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 925696 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 909312 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 909312 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 868352 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 843776 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 843776 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 819200 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 819200 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 794624 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 794624 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 786432 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 786432 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 778240 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 761856 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 745472 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 704512 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 704512 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 679936 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 679936 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 663552 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 663552 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 655360 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 655360 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 638976 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 638976 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 622592 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 622592 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 606208 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 606208 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 581632 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 581632 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 548864 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 548864 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 532480 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 524288 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 524288 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 507904 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 507904 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 18.26 MB, 0.03 MB/s
Interval WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 417792 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 417792 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 401408 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 385024 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 385024 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 376832 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 376832 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 368640 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 352256 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 352256 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 319488 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 319488 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 286720 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 286720 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 278528 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 278528 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 364.351928711s of 364.360015869s, submitted: 6
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1998848 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1998848 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1990656 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1990656 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1974272 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1974272 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1957888 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1949696 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 1941504 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 1941504 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 1933312 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 1933312 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 1925120 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1916928 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 1908736 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1884160 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1867776 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1867776 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 1859584 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 1859584 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1835008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1835008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1818624 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1818624 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1802240 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1802240 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1794048 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1794048 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1777664 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1777664 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1712128 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1654784 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: mgrc ms_handle_reset ms_handle_reset con 0x5640f0959c00
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3303149021
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3303149021,v1:192.168.122.100:6801/3303149021]
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: mgrc handle_mgr_configure stats_period=5
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 07:58:05 np0005536586 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
